VulnHub

AI-Powered Cybersecurity Intelligence

It Takes Only 250 Documents to Poison Any AI Model

Source: darkreading

Researchers have found that as few as 250 poisoned documents inserted into a model's training data are enough to manipulate the behavior of a large language model (LLM). This finding indicates that the threshold for poisoning AI models is significantly lower than previously believed.


Impact: Not specified

In the Wild: Unknown

Age: Newly disclosed

Remediation: Not specified
