malicious files tagged posts

Size doesn’t matter: Just a small number of malicious files can corrupt LLMs of any size

Size doesn't matter: just a small number of malicious files can corrupt LLMs of any size
Overview of our experiments, including examples of clean and poisoned samples, as well as benign and malicious behavior at inference time. (a)DoS pretraining backdoor experiments. Credit: arXiv (2025). DOI: 10.48550/arxiv.2510.07192

Large language models (LLMs), which power sophisticated AI chatbots, are more vulnerable than previously thought. According to research by Anthropic, the UK AI Security Institute and the Alan Turing Institute, it only takes 250 malicious documents to compromise even the largest models.

The vast majority of data used to train LLMs is scraped from the public internet. While this helps them to build knowledge and generate natural responses, it also puts them at risk from data poisoning attacks...

Read More