AI-generated content is the hot topic of the moment. Many sites are turning to AI to generate large volumes of content that would normally be created by humans; in many people’s eyes, it’s especially useful for “formulaic” content that a human writer would find mundane or generic, but that an AI can produce with no trouble. That said, there are debates around whether AI generation should be used at all; some say it takes work away from human writers and journalists, while others believe that advances in AI are inevitable and should be harnessed.
Wherever you personally stand, the fact is that search engines and other algorithms might become more hostile towards AI content in the future. Google has declared that, right now, it won’t penalise high-quality AI content, i.e. content that meets its E-E-A-T guidelines. Even so, there’s no guarantee this policy won’t change in the future. Here’s our rundown on whether Google will penalise sites overusing AI-written content in the future, and what you can do about it.
AI content detection is getting more sophisticated
As AI content evolves, so too do the tools used to detect it. AI content detection tools such as Originality are increasingly adept at spotting machine-written text; their algorithms look for statistical patterns common in AI-generated prose and flag articles that were probably not written by a human.
Of course, these tools aren’t fully reliable yet; a detector that can say with certainty whether text is AI-written may be a long way off. Still, they’re useful for flagging text that might be AI-generated, and if more and more people adopt tools like these to spot AI text, sites could fall in the rankings, especially if readers come to value AI content less than human-created content.
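To make the idea of “statistical patterns” concrete, here is a deliberately simple sketch of one signal detectors are often said to use: sentence-length variation (sometimes called “burstiness”). Human prose tends to mix short and long sentences, while very uniform sentence lengths can be a weak hint of machine generation. This is a toy illustration only; it is not how Originality or any commercial detector actually works, and the function and example texts below are our own inventions.

```python
import re
import statistics

def burstiness_score(text: str) -> float:
    """Toy heuristic: standard deviation of sentence length, in words.

    A low score means very uniform sentence lengths, which is one weak
    signal sometimes associated with machine-generated text. This is an
    illustration only, not a real detection algorithm.
    """
    # Split on sentence-ending punctuation; keep non-empty sentences.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.stdev(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew up."
varied = ("Stop. After a long and weary day of walking through the hills, "
          "we finally reached the village. It was quiet.")

# Lower score = more uniform sentence lengths.
print(burstiness_score(uniform) < burstiness_score(varied))  # prints True
```

Real detectors combine many such signals (and, often, trained language models of their own), which is why single heuristics like this one produce plenty of false positives on their own.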
Google’s stance towards AI-generated content isn’t hostile right now
At the moment, Google’s stance on AI-generated content isn’t hostile. The search engine says that if content meets its guidelines for high quality, it won’t penalise the sites hosting that content, even if the vast majority of it is AI-generated. In practice, though, readers may still want to know whether what they’re reading was written by a machine, which is where the aforementioned AI detection tools come in.
Google not being hostile towards AI is probably a product of the company developing its very own ChatGPT-style large language model, Bard, which was unveiled recently. Google now has a horse in the AI race, and so it makes sense that the company wouldn’t want to push too hard against AI content; after all, its own AI will probably evolve to the point where it’s capable of generating such content, and Google wouldn’t want to be seen fighting against its own business.
AI can make mistakes, and that’s a problem
Humans are, of course, capable of making mistakes. It’s one of the things that defines us as a species; unlike computers, our brains don’t follow strict systems or protocols, so they can randomly misfire or invent connections between things that were never really there. AI is also capable of making mistakes, though, and that’s where over-relying on AI content generation becomes an issue.
Recently, Google’s Bard AI returned a wrong answer in an advert designed to showcase the capabilities of the AI. As you can imagine, this was a bit of an embarrassment for Google; it wiped $100 billion off the company’s share price practically overnight, and it led many to question whether services like Bard are really ready to be released yet. ChatGPT is also capable of making mistakes, and it’s something that users are already conscious of.
The mistakes AI makes could lead Google and other search engines to penalise sites for using it extensively, but that’s not looking likely. After all, many users actually can’t tell the difference between AI-generated content and articles written by humans, and mistakes are par for the course in regular human writing as well. The fact that AI-generated articles could contain serious factual errors may, in fact, not be a factor for search engine optimisation going forward.
Google may not penalise AI content, but it could help you with detection
While it’s true that Google’s stance towards AI content appears friendly right now, that doesn’t mean a future in which AI content is indistinguishable from human content is inevitable. There’s a good chance that Google and other search engines will build better, more sophisticated tools for spotting AI-generated content, and they may even display a disclaimer of some sort on pages that are AI-generated.
This would help in a number of ways. First, it would be consistent with Google’s refusal to penalise AI content on websites. Second, it would show users that a page was generated by AI, so any factual errors could be checked and the article viewed in a different light. Third, it would tell users that the site they’re on uses AI content, which could in turn change how they think about that site and affect search engine rankings organically. Even if Google doesn’t outright penalise AI content, expect to see an increased emphasis on detection in future.