#LLMLimitations

2025-10-23

As few as 250 poisoned documents can compromise an LLM, regardless of model or dataset size.

Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples

arxiv.org/abs/2510.07192

2025-10-06

Deloitte refunds the Australian government after AI fabrications slip into a $440K welfare report

theregister.com/2025/10/06/del
