Here’s an interesting test you can carry out if you have the time and inclination: go to a site like Facebook and scroll through the feed, noting how many posts you think were written by AI. It’s sometimes difficult to tell, but there are certain signs, including the syntax of the post. Most of us couldn’t care less about formal grammar rules on social media, so if a post is full of Oxford commas and em dashes, it’s likely to have been generated by AI.
It’s indisputable that this is happening. One study suggests that 40% of posts on Facebook are written with AI assistance, and Meta is introducing tools that help users generate posts with AI. However, the question remains: how much of this should be viewed as a problem? The answer depends on how you look at it, and on how you view social media in general. Theoretically, it could lead to wider problems like mass misinformation, but the most pressing issue is the flood of repetitive, boring AI ‘slop.’
Of course, social media isn’t the only form of online entertainment. We can point to everything from music streaming to shopping to playing on social casino platforms, but social media remains the repository for human connection in a digital world, and if we reach a point where almost half of those interactions have no human behind them, it somewhat defeats the purpose.
We cited Facebook as one of the main culprits above, but in truth it is happening on other platforms too, albeit in different ways. Facebook tends to host relatively short posts, often focused on specific interest groups. For example, you might see a ‘Lord of the Rings Fan’ account that constantly churns out facts about the books, movies, and characters. Sometimes the content feels worthwhile, but it often feels pointless, like reading a Wikipedia entry over and over again.
LinkedIn and Reddit are also full of AI content
LinkedIn’s AI output tends to be more business-oriented, which feels logical, and, once again, it is being pushed by the platform’s owners. LinkedIn’s issue is arguably more pressing than Facebook’s, as the raison d’être of the platform is networking, and if you are networking with a bot, well, it seems a little pointless. As for Reddit, it’s a shame that so much AI content is being created there, as the platform’s calling card has always been unique human insight.
There are calls for social media companies to take action, but it’s unclear whether the will exists. Tagging content as AI-generated would be a start, yet such content is increasingly difficult to detect. Moreover, Elon Musk’s takeover of Twitter has prompted other social media companies to do less fact-checking, so there isn’t much appetite for questioning the veracity of AI-generated posts. Yes, features like Community Notes help, but there simply isn’t enough manpower to police platforms where millions of AI-generated posts are created daily.
While none of this feels like a clear and present danger to how the internet runs today, it’s worth remembering that things could get worse. Indeed, Bloomberg recently published an opinion piece discussing how AI slop was “killing off” the internet. The piece, which you can read here, extends beyond social media, examining the idea that the internet is powered by human connection, right down to the human-created algorithms used to generate search results, and how this is quickly being usurped by something literally inhuman. It serves as a warning that we may only be seeing the tip of the iceberg.