Most of the 583 million fake accounts Facebook disabled in Q1 were disabled "within minutes of registration". An estimated 3 to 4 percent of Facebook's monthly users are fake.
The site also said it took down 21 million pieces of nudity and sexual activity-related content, 96% of which was found and flagged by its systems before being reported.
The posts that keep Facebook's reviewers busiest are those showing adult nudity or sexual activity, quite apart from child pornography, which is not covered by the report.
Improved artificial-intelligence technology helped the company act on 3.4 million posts containing graphic violence, almost three times more than in the last quarter of 2017.
Facebook on Tuesday released numbers on the kinds of content, and how much of it, the company has removed in recent months. All 836 million spam posts were flagged by an artificial intelligence program before human users reported them, according to the report.
It said the growth was possibly the result of a higher volume of graphically violent content being shared on Facebook in the first three months of this year.
Facebook says it has more than 2 billion users.
Facebook, which, like Google, is now working on A.I. technology to identify "hate speech", admitted that its current hate-speech-detection technology "still doesn't work that well" and that automatically flagged content "needs to be checked by our review teams".
Facebook also released statistics that quantified how pervasive fake accounts have become on its influential service, despite a long-standing policy requiring people to set up accounts under their real-life identities.
Only 38 percent of the hate speech posts it acted on were flagged by automation, which fails to interpret nuances like counter-speech, self-referential comments or sarcasm.
"As Mark Zuckerberg said at F8, we have a lot of work still to do to prevent abuse, .It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important". "In other words, of every 10,000 content views, an estimate of 22 to 27 contained graphic violence", the report said.
Facebook acknowledged it has work to do when it comes to properly removing hate speech.
"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too", he said.
Facebook has been in hot water following allegations of data privacy violations by Cambridge Analytica, an election consultancy that improperly harvested information from millions of Facebook users for the Brexit campaign and Donald Trump's United States presidential bid.