@jackwilliambell @lain_7
The paper argues that agent systems can’t solve problems like the travelling salesman problem because of fundamental limits. That sounds impressive, but it’s really just computer science 101. The travelling salesman problem is NP-hard, which means no algorithm can efficiently find the optimal answer for every possible instance unless P = NP, a question that is still unsolved.
👉 This limitation applies to all computing, not just agents or language models 👈.
In the real world, nobody tries to solve every case perfectly. Practical programs use shortcuts: heuristics, approximations, and clever pruning. That’s how routing software, logistics systems, and scheduling tools work. They don’t find the perfect answer every time, but they find good answers fast enough to be useful. Agentic systems do exactly the same thing.
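To make that concrete, here’s a minimal sketch of the kind of shortcut I mean (a plain nearest-neighbour heuristic, my own toy example, not anything from the paper):

```python
import math
import random

def nearest_neighbour_tour(points):
    """Greedy TSP heuristic: always hop to the closest unvisited city.

    Not optimal, but runs in roughly O(n^2) instead of checking all n! tours.
    """
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Toy run: 100 random cities, a "good enough" tour in milliseconds.
cities = [(random.random(), random.random()) for _ in range(100)]
tour = nearest_neighbour_tour(cities)
length = sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print(f"tour length ≈ {length:.2f}")
```

That tour can be well off the true optimum in the worst case, yet it is exactly the kind of trade-off real routing and scheduling software makes every day.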
👉 The paper also mixes up two different ideas 👈. Computational difficulty explains why some problems are slow to solve perfectly. It does not explain why a system might confidently give a wrong answer. Those errors come from lack of checking, missing information, or bad assumptions, not from NP-hardness.
So the core claim is: hard problems are hard for everyone. That’s true, but it’s not a special weakness of #agentic_ai systems, and it doesn’t really explain the kinds of failures the paper is trying to blame on theory.
TLDR: The paper is shit, and that’s why no serious venue peer-reviewed it. What is super embarrassing is that #Wired and #Futurism parroted it.
I expect to keep encountering reposts of this shite for the next 6 months, like that "AI atrophies your critical reasoning" preprint from MIT.
There is plenty to legitimately criticize about #AI and #LLM systems; no need to make silly arguments against them.
#RegulateAI