Google's search team recently announced a new feature called AI Overviews, which aims to harness the latest generative artificial intelligence (genAI) technologies to improve the search experience. Currently available only to users in the US, the overviews let people ask more complex questions and receive higher-quality search results.
The new feature has not been without its issues, with a number of erroneous and nonsensical overviews appearing over the past week. Google has acknowledged these errors and reiterated its commitment to maintaining a high standard of accuracy. According to Liz Reid, Vice President and Head of Google Search, the company appreciates user feedback and takes concerns seriously; that feedback has been pivotal in identifying areas for improvement.
AI Overviews differ significantly from chatbots and other large language model (LLM) products. They are not designed to generate output based solely on training data; instead, they are integrated with Google's core web ranking systems to draw on high-quality search results, and the information they include is corroborated by top web results to minimise inaccuracies. Despite these measures, Google admits that AI Overviews can occasionally misinterpret queries or language nuances, which can result in incorrect information being shown.
Google says the feature's testing was extensive, involving rigorous evaluations and red-teaming efforts to address potential issues before launch. However, with millions of users engaging with the feature, novel and sometimes nonsensical search queries have surfaced. These queries have produced some odd AI Overviews, most often related to data voids, or information gaps, on unusually obscure topics.
An example highlighted by Google involved the query "How many rocks should I eat?", which yielded a satirical response linked to a geological software provider's website. Such instances indicate challenges in filtering out less serious content, and Google has since made numerous technical improvements to address these issues.
Among the improvements are better detection mechanisms for nonsensical queries, restrictions on user-generated content that might offer misleading advice, and additional safeguards for health and news topics. Google's updates aim to prevent potentially harmful content and misinformation from appearing in AI Overviews. The company has also restricted the overviews from triggering on queries where they were found to be less helpful, particularly in areas where freshness and factual accuracy are paramount.
Google has also implemented measures to swiftly address and rectify any violations of its content policies. According to the company, the number of violations found was remarkably low: fewer than one for every seven million unique queries. Time will tell whether accuracy improves, given how accustomed we have all become to relying on highly accurate search results.