Facebook Inc. pulled tens of millions of user posts that violated its terms of service in the past six months, according to the biannual transparency report it released Wednesday. The report has become a customary company disclosure, much like a quarterly earnings report.

What is new is the inclusion, for the first time, of data from Instagram, underscoring the depth of content violations on the photo-sharing app. Facebook revealed that millions of posts containing child pornography, sales of drugs and firearms, terrorist propaganda, and suicide and self-injury content were removed from Instagram in the third quarter.

The inclusion of Instagram in its report adds another layer to Facebook’s ever-evolving fight against illegal and objectionable content. Facebook has poured billions of dollars into making its site safer and less vulnerable to outside influences, and breaking out data about Instagram shows the progress it is making when monitoring billions of photos and videos.

In a conference call with reporters, Chief Executive Mark Zuckerberg and members of the Facebook community standards team laid out the numbers for Instagram: removal of 753,700 pieces of content relating to child nudity or sexual exploitation of children, up 47% from the second quarter; 133,300 instances of terrorist propaganda, up 25%; 1.5 million pieces linked to the sale of drugs, about the same as the second quarter; and 845,000 items related to suicide and self-injury, roughly the same as the previous quarter.

The numbers for Facebook in general give an idea of the sheer scale of illegal content that courses through a platform used by roughly 2.5 billion people worldwide.

Facebook said it removed 1.7 billion fake accounts in the third quarter, down from 2.2 billion in the first quarter. “Because we are blocking more attempts to create fake, abusive accounts before they are even created, there are fewer for us to disable,” Facebook said in the report.

The social-networking giant said it expunged 11.6 million pieces of content related to child pornography in the quarter ended in September. (Facebook said its software algorithms and heavy reliance on artificial intelligence were able to detect 99% of that content.)

The company removed 4.4 million pieces of content related to drug sales and 2.3 million related to firearm sales in the third quarter — up from 841,000 and 609,000, respectively, six months earlier.

One notable finding: terrorist propaganda was harder to identify on Instagram than on Facebook itself. The company said it detected and removed 98.5% of terrorism content on its main platform, compared with 92.2% on Instagram.

All told, the numbers illustrate the flood of challenges Facebook and other social-networking services like Snap Inc. and Twitter Inc. constantly face. Facebook’s status as the largest communications platform has led to several high-profile congressional hearings in which Zuckerberg has had to explain to federal lawmakers what measures his company is taking to clean up objectionable content.