As AI systems start to dominate the marketplace, concerns about accuracy and precision are becoming more prevalent. The convenience of these systems is undeniable: They can answer complex questions in minutes, save us time and help us create content. But what if the information you're relying on isn't just wrong, but completely fabricated? AI models are designed to sound right even when they're guessing, so they can be extremely convincing. They often present supporting information to justify their position, making it difficult to distinguish fact from fiction. That raises a critical question: Can you trust AI with complex, high-stakes tasks?
What causes hallucinations?
These errors, known in the industry as hallucinations, are often attributed to knowledge gaps caused by the parameters and information loaded into the system. What's often overlooked is that AI is also designed to keep you coming back for more by, in short, making you happy.
In the case of knowledge gaps, you can train AI to identify the make and model of a vehicle from vast numbers of images, yet it may still misidentify other objects as vehicles because it lacks context. In the case of keeping users happy, if the user doesn't point out that the returned information is wrong, the AI won't volunteer how weak its results are and, in some cases, will even deny it made a mistake.
AI is also capable of generating extremely complex, detailed and convincing lies. OpenAI released a report that essentially found that when AI is punished for lying, it learns to lie better. AI systems fill knowledge gaps by predicting plausible information based on patterns. The takeaway? While hallucinations may seem like lies, they're usually the product of gaps in the model's data or of unintended sub-objectives inherent in all AI.
Recently, I tested OpenAI's advanced deep research capabilities to validate some information for an article I was working on. I prompted the model to provide trends and citations “on how AI is transforming factories into Industry 4.0.” After approximately 15 minutes, I received a collegiate-level report that detailed trends and cited case studies from consulting firms and manufacturers I was familiar with. Overall, it was a highly engaging read. The statistics seemed sound, the applications seemed relevant and the quotes were ideal, as if tailored to my request. The problem: the report contained heavily fabricated facts, citations that linked to irrelevant sources and a completely fabricated case study that, coincidentally, involved a company that is my client.
What is the first step in protecting your business from hallucinations?
First and foremost, verify the information presented by any AI model. Be especially critical when dealing with topics like finance, healthcare or anything else that is materially impactful. If you're not familiar with a topic, always cross-reference multiple sources, and have the AI review its own output to confirm it's correct. I often tell ChatGPT to review its output as an overly critical boss; oftentimes, the system will identify some of the initial gaps itself, as in the sketch below.
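As a rough illustration only, here is a minimal Python sketch of that self-review step using OpenAI's Python client. The model name and the exact reviewer wording are assumptions for the example, not a recommendation of any particular setup.

```python
# Minimal sketch of a "critical boss" self-review pass, assuming the official
# openai Python client and an available chat model (model name is illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def critical_review(draft: str) -> str:
    """Ask the model to re-check its own draft as an overly critical boss."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model could be used here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an overly critical boss. Review the draft below, "
                    "flag any claim, statistic or citation you cannot verify, "
                    "and list what still needs an outside source."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content


# Example: review a draft answer produced in an earlier step.
draft_answer = "Industry 4.0 adoption cut downtime by 45% across all factories in 2023."
print(critical_review(draft_answer))
```

This second pass doesn't replace checking an outside source, but it often surfaces the weakest claims before you spend time verifying the rest.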
How can you improve your prompts to mitigate hallucinations?
When prompting AI, be very specific. Break complex requests into smaller prompts that build on each other, so you can refine results as they are generated. Tell your AI engine what you want to see, how you want to see it and, more importantly, what you do not want to see. Provide basic guardrails so the model draws on the correct information when formulating its response, as in the sketch below.
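As a simple sketch of that chaining approach, assuming the same OpenAI Python client and an illustrative model name, each prompt stays narrow, carries the earlier exchange forward and sits inside explicit guardrails; the system-message wording is an assumption, not a prescribed formula.

```python
# Sketch of breaking one broad request into smaller, chained prompts
# with explicit guardrails; model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()


def ask(prompt: str, history: list) -> str:
    """Send one focused prompt, carrying forward the earlier exchanges."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer


history = [{
    "role": "system",
    # Guardrails: what you want, how you want it, and what you do not want.
    "content": (
        "Answer only from widely reported, verifiable information. "
        "If you are not sure of a fact, say so explicitly. "
        "Do not invent statistics, quotes or citations."
    ),
}]

# Step 1: a narrow question instead of one sweeping request.
trends = ask("List three documented ways AI is used on factory floors today.", history)

# Step 2: build on the previous answer and refine it.
detail = ask(
    "For the first item above, explain it in two sentences for a "
    "non-technical executive. Flag anything you could not verify.",
    history,
)

print(trends)
print(detail)
```

The point is the structure, not the specific wording: each follow-up prompt narrows the scope, and the guardrails tell the model to admit uncertainty rather than fill gaps with plausible-sounding detail.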
Read the full article published in Forbes Business Council.