It Takes Only 250 Documents to Poison Any AI Model
Researchers have discovered that it takes only 250 documents to effectively manipulate the behavior of a large language model (LLM). This finding indicates that the threshold for poisoning AI models is significantly lower than previously believed.