AI21’s Contextual Answers: Mitigating Hallucination Problem for Enterprise AI Adoption

AI21 Labs recently released “Contextual Answers,” a question-answering engine designed primarily for enterprise use. This innovative engine, when integrated with large language models (LLMs), allows users to upload their data libraries, enabling the LLM to respond based on specific information.

The launch of AI products like ChatGPT has brought significant changes to the AI industry. However, due to concerns about trustworthiness, many businesses have been hesitant to adopt these technologies.

Research indicates that employees spend almost half of their workdays searching for information, highlighting the potential for chatbots capable of performing search functions. Nonetheless, most existing chatbots are not optimized for enterprise use.

AI21 developed Contextual Answers to bridge the gap between general-use chatbots and enterprise-level question-answering services. It allows users to customize the AI’s responses without the need to retrain models, addressing some of the main barriers to adoption, such as cost, complexity, and the lack of model specialization for organizational data.

One major challenge faced in developing useful LLMs, like ChatGPT and Google’s Bard, is teaching them to express a lack of confidence. Instead of providing uncertain responses like “I don’t know,” these models tend to generate information without a factual basis, a phenomenon known as “hallucinations.”

AI21’s Contextual Answers aims to solve the hallucination problem by either providing information only from user-provided documentation or not providing any response at all, thus enhancing accuracy.
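The ground-or-refuse pattern described above can be illustrated with a minimal sketch. This is not AI21's actual API or implementation — it is a hypothetical toy using crude keyword-overlap retrieval, where the `contextual_answer` function, its `threshold` parameter, and the refusal string are all illustrative assumptions:

```python
# Hypothetical sketch of document-grounded QA with refusal --
# not AI21's implementation, just the general pattern it describes.

def score(question: str, passage: str) -> float:
    """Crude relevance score: fraction of question words found in the passage."""
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / len(q_words)

def contextual_answer(question: str, documents: list[str], threshold: float = 0.5) -> str:
    """Return the best-matching uploaded passage, or an explicit refusal
    when no document supports the question (instead of hallucinating)."""
    best = max(documents, key=lambda d: score(question, d))
    if score(question, best) < threshold:
        return "Answer not in documents"  # refuse rather than invent an answer
    return best

docs = [
    "Q2 revenue grew 12 percent year over year",
    "The refund policy allows returns within 30 days",
]
print(contextual_answer("what is the refund policy", docs))
# -> "The refund policy allows returns within 30 days"
print(contextual_answer("who is the CEO", docs))
# -> "Answer not in documents"
```

A production system would use an LLM over semantically retrieved passages rather than word overlap, but the key design choice is the same: when retrieval confidence falls below a threshold, the system declines to answer rather than generating ungrounded text.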

In sectors where accuracy is crucial, such as finance and law, the adoption of generative pre-trained transformer (GPT) systems has yielded mixed results. The finance industry has treated GPT systems with caution because of their tendency to hallucinate or mix up sources, even when connected to the internet for source linking. Similarly, in the legal sector, a lawyer who relied on ChatGPT-generated outputs in a court case faced fines and sanctions.

By proactively loading AI systems with relevant data and intervening to prevent non-factual output, AI21 appears to have addressed the hallucination problem, making mass adoption more likely. That matters especially in fintech, where traditional financial institutions have been hesitant to embrace GPT technology and the cryptocurrency and blockchain communities have seen mixed success with chatbots.

