AI And Google’s Product Ratings: A Comprehensive Guide To The Latest Policy Changes

It’s difficult to ignore the impact that the AI revolution is already having on the tech world.

Whether it’s fear over jobs being lost or excitement regarding the kinds of innovations AI could bring to the tech space in the future, it’s fair to say that Silicon Valley and the wider tech sphere are buzzing about the implications this technology might have.

However, along with the excitement, there’s also a palpable sense of worry, especially among some more sceptical observers who believe that AI could have far-reaching ramifications, and that those ramifications could be difficult to walk back once they’re in motion.

Perhaps one piece of evidence to this effect is an upcoming (at the time of writing) change that Google is making to its Product Ratings policies.

The change clarifies certain elements of the existing policy, adds guidance on how it is enforced, and introduces a brand new clause that specifically focuses on AI.

Let’s take a look at what’s changed in the latest Google Product Ratings policy shift, which comes into effect on August 28th!

A new Automated Content policy has been added

The biggest and most obvious change to the Product Ratings policies is the addition of a brand new clause.

Titled “Automated Content”, the policy covers reviews of products that are generated by automated programs or “artificial intelligence application[s]”.

Google outright disallows such reviews, and if you identify a review in your feed that was generated by an automated program, you should mark it with the “<is_spam>” attribute so that Google can take appropriate action.
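
As a minimal sketch, a flagged review in a product reviews XML feed might look something like the excerpt below. The structure is based on Google’s product reviews feed schema, but every value here is a placeholder, and the official schema documentation should be treated as the authoritative reference.

    <!-- Hypothetical excerpt from a product reviews feed. Setting
         <is_spam> to true flags the review as spam/automated content
         so Google can take appropriate action. All values below are
         placeholders, not real data. -->
    <review>
      <review_id>1234567</review_id>
      <reviewer>
        <name>Anonymous</name>
      </reviewer>
      <review_timestamp>2023-08-28T09:30:00Z</review_timestamp>
      <content>Generic five-star praise produced by an automated tool.</content>
      <ratings>
        <overall min="1" max="5">5</overall>
      </ratings>
      <is_spam>true</is_spam>
    </review>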

The new clause is likely intended to stop products from accumulating false or spurious reviews that bolster their status; AI and automated programs make it cheap to mass-produce fake reviews that artificially inflate a product’s rating.

More guidance is being provided on enforcement

As well as adding a new Automated Content policy, Google is also adding “more guidance on how we enforce our policies”.

Google says it uses a combination of human review and automation to ensure that content and reviews are policy-compliant, with automated enforcement relying on machine learning algorithms to “protect our merchants and users”.

However, when a case becomes more complicated or severe, it’s passed on to human reviewers, who can handle evaluations that machines might struggle with because they require an understanding of context.

This means that if you’ve got a Product Ratings case that’s particularly difficult, it’s likely to be passed on to a human team, so don’t worry; your livelihood isn’t entirely in the hands of machines and AI.

Google also wants to reiterate that it takes action on “content and reviews that violate our policies”. 

This action could include rejecting reviews that violate policies, as well as “issuing warnings or suspending accounts” if those accounts repeatedly violate policy or do so in a particularly extreme fashion.

In addition, Google says that when an image is flagged for policy violation, the associated review will also be blocked.

Google is clarifying its existing policies as well

The existing Product Ratings policies are also being clarified in the latest update.

There’s a whole range of content that will be flagged if it’s submitted as part of a review, including illegal or dangerous content, sexually explicit material, and links to malware or harmful software.

Other contravening content includes off-topic information, impersonation of another user, and infringement of intellectual property rights.

It’s clear that Google takes its Product Ratings policies very seriously, and this change is intended not only to update them but also to remind users that enforcement is ongoing.

What do these changes mean?

If you’re a page owner and you want to know what these changes mean for you, the answer is “it depends”.

Generally speaking, the change should be positive. The addition of a new clause forbidding AI- or program-generated reviews means that you should now only see human-written reviews for your business.

However, if you’ve been using an AI or automated program to generate fake reviews for your business, then you may find this gets much more difficult in future (and rightfully so, arguably).

Google’s changes are likely aimed at creating a more reliable, less spam-saturated environment for Product Ratings; if reviews on Google can’t be trusted, the platform’s reputation suffers, which in turn makes it a less valuable business overall.

These policy changes speak to a wider worry that AI-generated content could overwhelm and saturate the internet, thus making much of the content available online much less valuable to human users.

These policy changes are unlikely to be the last we hear of companies trying to protect both themselves and their users from a swarm of AI-generated content flooding the internet.

In the case of product reviews, a preponderance of AI-generated ratings could mean that businesses and products don’t receive fair ratings from humans who’ve actually used their services, which would render any reviews you read online essentially meaningless.

As AI and generative models improve, we can expect to see more of these kinds of policy changes being implemented. We’ll have to wait and see what else Google does to reduce the likelihood of AI spam taking over product reviews in future.

In the meantime, you can read all about the new policy change via Google’s official announcement post, and you can see the current policies here.
