Microsoft’s Proposed Patent Aims to Tackle AI’s Truth Problem

Microsoft has unveiled a plan to address one of AI’s most pressing challenges: reducing or eliminating “hallucinations,” or false information generated by language models. In a recent patent application, Microsoft researchers detailed a new method designed to enhance AI accuracy by integrating external knowledge and user feedback mechanisms.

The patent, titled Interacting with a Language Model Using External Knowledge and Feedback, was filed with the U.S. Patent and Trademark Office last year and made public on October 31. The proposed approach introduces a “response-augmenting system” (RAS) that would enable AI models to pull additional information from external sources when generating answers to user queries. This external check could help the model assess whether it has provided a “useful” response by consulting verified online sources or structured datasets.

If the AI’s initial answer lacks credibility or completeness, the RAS could flag it and prompt the AI to inform the user that the response may be lacking. Users would also be able to provide direct feedback to further improve the answer, creating a more transparent and interactive experience.

Notably, this method doesn’t require extensive model fine-tuning, which is typically time-consuming and costly. Instead, it relies on a separate mechanism that connects with an AI model’s response generation to verify facts dynamically.

The issue of hallucinations has become a focal point in AI development, as generative models sometimes produce responses that are inaccurate or misleading. Hallucinations have repeatedly undermined user trust in AI: in high-profile incidents, Google's AI Overviews advised users to eat rocks, and X's Grok spread misinformation around elections, underscoring the need for reliable and accurate AI outputs.

Microsoft’s innovative approach could mark a significant step forward in making AI more dependable, potentially setting a new standard for AI reliability and accountability in the tech industry.

AI was used to generate part or all of this content.