Dehallucinating Large Language Models Using Formal Methods Guided Iterative Prompting

dc.contributor.author: Jha, Susmit
dc.contributor.author: Jha, Sumit Kumar
dc.contributor.author: Lincoln, Patrick
dc.contributor.author: Bastian, Nathaniel D.
dc.contributor.author: Velasquez, Alvaro
dc.contributor.author: Neema, Sandeep
dc.date.accessioned: 2023-08-14T15:57:55Z
dc.date.available: 2023-08-14T15:57:55Z
dc.date.issued: 2023
dc.description.abstract: Large language models (LLMs) such as ChatGPT have been trained to generate human-like responses to natural language prompts. LLMs are trained on a vast corpus of text data and can generate coherent and contextually relevant responses to a wide range of questions and statements. Despite this remarkable progress, LLMs are prone to hallucinations, making their use in safety-critical domains such as autonomous systems difficult. Hallucinations in LLMs refer to instances where the model generates responses that are not factually accurate or contextually appropriate. These hallucinations can occur due to a variety of factors, such as the model’s lack of real-world knowledge, the influence of biased or inaccurate training data, or the model’s tendency to generate responses based on statistical patterns rather than a true understanding of the input. While these hallucinations are a nuisance in tasks such as text summarization and question answering, they can be catastrophic when LLMs are used in autonomy-relevant applications such as planning. In this paper, we focus on the application of LLMs in autonomous systems and sketch a novel self-monitoring and iterative prompting architecture that uses formal methods to detect these errors in the LLM response automatically. We exploit the dialog capability of LLMs to iteratively steer them to responses that are consistent with our correctness specification. We report preliminary experiments that show the promise of the proposed approach on tasks such as automated planning.
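
The abstract describes a self-monitoring loop in which a formal checker validates each LLM response against a correctness specification and, on failure, feeds the detected violation back to the model as a follow-up prompt. The sketch below is a minimal illustration of that idea, not the authors' implementation; the names Specification, CheckResult, check_response, query_llm, and iterative_prompting are hypothetical placeholders introduced here for illustration.

```python
# Minimal sketch (assumed, not the paper's code) of formal-methods-guided
# iterative prompting: generate a response, check it against a correctness
# specification, and feed any detected violation back as a corrective prompt.

from dataclasses import dataclass


class Specification:
    """Placeholder for a formal correctness specification, e.g. temporal-logic
    properties or goal/precondition constraints for a planning task."""


@dataclass
class CheckResult:
    ok: bool
    violation: str = ""  # human-readable description of the violated property


def check_response(response: str, spec: Specification) -> CheckResult:
    """Stand-in for a formal checker (e.g. a plan validator or model checker)."""
    raise NotImplementedError


def query_llm(messages: list[dict]) -> str:
    """Stand-in for a chat-style LLM call that returns the model's reply text."""
    raise NotImplementedError


def iterative_prompting(task: str, spec: Specification, max_rounds: int = 5):
    """Re-prompt the LLM until its response is consistent with the
    specification, or give up after max_rounds dialog turns."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_rounds):
        response = query_llm(messages)
        result = check_response(response, spec)
        if result.ok:
            return response  # response satisfies the correctness specification
        # Self-monitoring step: steer the dialog using the detected violation.
        messages += [
            {"role": "assistant", "content": response},
            {"role": "user",
             "content": "The previous answer violates the specification: "
                        f"{result.violation}. Please revise it."},
        ]
    return None  # no specification-consistent response within the budget
```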
dc.description.sponsorship: U.S. Army
dc.identifier.citation: Jha, S., Jha, S. K., Lincoln, P., Bastian, N. D., Velasquez, A. & Neema, S. (2023). Dehallucinating Large Language Models Using Formal Methods Guided Iterative Prompting. Proceedings of the 2023 IEEE International Conference on Assured Autonomy, pp. 149-152. IEEE.
dc.identifier.doi: https://doi.org/10.1109/icaa58325.2023.00029
dc.identifier.uri: https://hdl.handle.net/20.500.14216/354
dc.publisher: IEEE
dc.relation.ispartof: 2023 IEEE International Conference on Assured Autonomy (ICAA)
dc.subject: Large-Language-Models
dc.subject: Hallucinations
dc.subject: Formal-Methods
dc.title: Dehallucinating Large Language Models Using Formal Methods Guided Iterative Prompting
dc.type: proceedings-article
local.peerReviewed: Yes
