Prompted to Lie? Evaluating How Prompt Specificity Influences Hallucinations in Large Language Models
This project evaluates the effect of prompt specificity on hallucination in large language models across multiple domains of expertise. The goal is to quantify how greater precision in prompting affects a model's tendency to generate false information. By comparing outputs across domains such as biology, history, and law, the study aims to determine whether tailored prompting strategies offer a scalable way to improve the factual accuracy of AI systems.
Interns: Neal Shandilya and Jack Balciunas
Mentors: Amber Mills and Maitreyee Majumdar (AMDS)
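To make the experimental design concrete, below is a minimal sketch of the kind of comparison loop such a study might run. Everything in it is an assumption for illustration: the domains, the prompt templates at each specificity level, and the `query_model` and `is_hallucinated` stubs stand in for whatever model API and factuality check the project actually uses.

```python
# Illustrative sketch of a prompt-specificity vs. hallucination experiment.
# The model call and fact-check below are placeholders, not the study's code.

from collections import defaultdict

# One sample question per domain (hypothetical examples).
DOMAINS = {
    "biology": "What role does the enzyme Rubisco play in photosynthesis?",
    "history": "What were the immediate causes of the War of 1812?",
    "law": "What does the doctrine of stare decisis require of courts?",
}

# Prompt templates ordered from vague to highly specific.
SPECIFICITY_LEVELS = {
    "low": "{question}",
    "medium": "Answer factually and concisely: {question}",
    "high": ("You are a domain expert. Answer only with verifiable facts, "
             "and say 'I don't know' if unsure: {question}"),
}

def query_model(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., an API request)."""
    return "stub answer"

def is_hallucinated(answer: str, domain: str) -> bool:
    """Placeholder for checking an answer against a reference source."""
    return False

def run_experiment(trials_per_cell: int = 20) -> dict:
    """Return hallucination rates keyed by (domain, specificity level)."""
    rates = defaultdict(float)
    for domain, question in DOMAINS.items():
        for level, template in SPECIFICITY_LEVELS.items():
            prompt = template.format(question=question)
            errors = sum(
                is_hallucinated(query_model(prompt), domain)
                for _ in range(trials_per_cell)
            )
            rates[(domain, level)] = errors / trials_per_cell
    return dict(rates)

if __name__ == "__main__":
    for (domain, level), rate in run_experiment().items():
        print(f"{domain:8s} {level:6s} hallucination rate: {rate:.0%}")
```

Comparing the per-cell rates across specificity levels within each domain is one straightforward way to quantify whether more precise prompting reduces false outputs.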