Prevent factual errors from LLM hallucinations with mathematically sound Automated Reasoning checks (preview)

AWS

Amazon Web Services (AWS) announced Automated Reasoning checks (preview), a new safeguard in Amazon Bedrock Guardrails that helps reduce hallucinations in large language models (LLMs) by mathematically validating the accuracy of their responses. The capability leverages automated reasoning, a field of computer science that uses mathematical proofs and logical deduction to verify the behavior of systems and programs. Unlike machine learning (ML), which makes predictions, automated reasoning provides mathematical guarantees about a system's behavior.

AWS already uses automated reasoning in key service areas such as storage, networking, virtualization, identity, and cryptography. For example, automated reasoning is used to formally verify the correctness of cryptographic implementations, improving both performance and development speed. AWS is now applying a similar approach to generative AI: Automated Reasoning checks (preview) in Amazon Bedrock Guardrails is the first generative AI safeguard that helps prevent factual errors from hallucinations by using logically accurate and verifiable reasoning to explain why generative AI responses are correct.

Automated Reasoning checks are particularly useful where factual accuracy and explainability matter. For example, you could use them to validate LLM-generated responses about human resources (HR) policies, company product information, or operational workflows. Used alongside techniques such as prompt engineering, Retrieval-Augmented Generation (RAG), and contextual grounding checks, Automated Reasoning checks add a more rigorous and verifiable layer for ensuring that LLM-generated output is factually accurate. By encoding your domain knowledge into structured policies, you can ensure that your conversational AI applications provide reliable and trustworthy information to your users.
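To make this concrete, here is a minimal sketch of how a response could be validated from application code using the ApplyGuardrail API in the AWS SDK for Python (boto3). The guardrail identifier, version, Region, and the HR question and answer below are placeholder assumptions; the sketch also assumes you have already created a guardrail with an Automated Reasoning policy attached (in the preview, the policy is built from your domain documents in the Amazon Bedrock console).

```python
import boto3

# Assumed placeholders: replace with your own guardrail ID, version, and Region.
GUARDRAIL_ID = "your-guardrail-id"
GUARDRAIL_VERSION = "DRAFT"

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")


def validate_answer(question: str, answer: str) -> dict:
    """Run a model-generated answer through the guardrail so its configured
    checks, including any attached Automated Reasoning policy, can assess it."""
    return bedrock_runtime.apply_guardrail(
        guardrailIdentifier=GUARDRAIL_ID,
        guardrailVersion=GUARDRAIL_VERSION,
        source="OUTPUT",  # validate model output rather than user input
        content=[
            # The user's question provides context for evaluating the answer.
            {"text": {"text": question, "qualifiers": ["query"]}},
            # The LLM-generated answer to be validated.
            {"text": {"text": answer}},
        ],
    )


result = validate_answer(
    question="How many weeks of parental leave do full-time employees get?",
    answer="Full-time employees are eligible for 20 weeks of parental leave.",
)

# "GUARDRAIL_INTERVENED" means at least one configured check flagged the text.
# The assessments carry the findings your application can use to explain,
# correct, or block the response before showing it to the user.
print(result["action"])
for assessment in result.get("assessments", []):
    print(assessment)
```

Rather than returning only a pass or fail signal, the idea is that the findings explain how a response relates to the rules encoded in your policy, which is what makes the validation verifiable and explainable.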