#hallucinations

💧🌏 Greg CocksGregCocks@techhub.social
2026-02-28

[The WSJ] Let AI Run [Their] Office Vending Machine. It Lost Hundreds Of Dollars.
Anthropic’s Claude ran a snack operation in the WSJ newsroom. It gave away a free PlayStation, ordered a live fish—and taught us lessons about the future of AI agents.
--
wsj.com/tech/ai/anthropic-clau <-- shared media article
--
youtu.be/SpPhm7S9vsQ?si=aJQ2_B <-- shared video
--
[When you get clever journalists to !$%^&*@ with AI… bravo! And this is a very simple situation, vending machines have been around since literally the Roman Empire
“You are using the wrong prompts” and LUDDITES! In the comments in 3… 2… 1…]
#vendingmachine #artificialintelligence #AIHallucination #hallucinations #emperorsnewclothes #ohhhshiny #experiment #contextwindow #AIagent #claude #autonomous #compliance #fish #PlayStation #snackliberationday #knowledgeboundaries #guardrails #redteam #GenAI cynicism
@WSJ @Anthropic @Claude

Miguel Afonso Caetanoremixtures@tldr.nettime.org
2026-02-27

"AI tools are making potentially harmful errors in social work records, from bogus warnings of suicidal ideation to simple “gibberish”, frontline workers have said.

Keir Starmer last year championed what he called “incredible” time-saving social work transcription technology. But research across 17 English and Scottish councils shared with the Guardian has now found AI-generated hallucinations are slipping in.

As scores of local authorities begin to use AI note-takers to accelerate recording and summarisation of meetings with adult and child service users, a seven-month study by the Ada Lovelace Institute found “some potentially harmful misrepresentations of people’s experiences are occurring in official care records”.

The independent thinktank found that one social worker who had used an AI transcription tool to create a summary said the technology had incorrectly “indicated that there was suicidal ideation”, but “at no point did the client actually … talk about suicidal ideation or planning, or anything”."

theguardian.com/education/2026

#AI #GenerativeAI #AITranscription #SocialWork #UK #Hallucinations

2026-02-26

Gemini and Context Splicing

Reading Time: 3 minutes

Imagine that you're out for a walk. You're looking at the landscape and taking photos with your camera or phone. At the same time your GPS watch or phone app is recording your location every second. What remains of the walk is a GPS track with a resolution of one set of coordinates per second, plus photos from the moments you chose to take them. Most of the walk is "lost", because recording video of the entire walk would consume too much storage and battery.

To use another analogy: for many years podcasters would record an hour and a half of show but edit it down, keeping just the best thirty minutes. The blurb describing a podcast might be three to five sentences, chapter markers, and possibly a list of hyperlinks.

Trying to Remember a Conversation

AI, too, has a limited attention span. In theory Gemini can accept up to a million tokens, but in practice the usable limit is quite a bit lower. A token is a small chunk of text, roughly a word or word fragment. The chattier you are, the more tokens AI models like Gemini need to keep track of.

If you're verbose, and Gemini is, that's a lot of conversation to remember. We forget, and so does the AI model: it starts to misremember things.

Context Switching

LLMs have a huge set of data from which to work, so I expect them to recognise a screengrab from Immich or Photoprism and to remember which is which. If I tell one "I'm using the laptop for this, and the Pi for that", I expect it to remember and to switch between the two automatically without me telling it. The issue is that it doesn't. You need to keep saying "I'm using the Pi for this" and "I'm using the laptop for that". It doesn't retain the context, especially over longer tasks.

The Gemini Trap

If you're sorting through tens of thousands of files and collecting the output from exiftool or similar tools, you can easily end up with documents of 7 million tokens, while the context window is one million. Gemini doesn't say "Wait, I can't handle such a large data set." It hallucinates instead.
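
A rough pre-flight check can catch this before you paste. This is a minimal sketch, assuming the common rule of thumb of roughly four characters per token; the real count depends on the model's tokenizer, and `CONTEXT_LIMIT` is just Gemini's advertised figure, not a guarantee.

```python
# Rough guard before pasting a large dump into a chat.
# Assumes ~4 characters per token, a common rule of thumb;
# the real count depends on the model's tokenizer.
CONTEXT_LIMIT = 1_000_000  # Gemini's advertised window; practical limits are lower

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about four characters per token."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, limit: int = CONTEXT_LIMIT) -> bool:
    """True if the text probably fits in the model's context window."""
    return estimate_tokens(text) <= limit
```

If the check fails, summarise or sample the data before sending it, rather than trusting the model to complain.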

Helping Gemini

When I noticed this I started a new chat. I also asked Gemini how I could provide it with enough data for it to understand without being overwhelmed, and it said (paraphrased) to give it the head and the tail and it would draw conclusions from that. From that moment on I tried to give Gemini enough to understand the output, without giving it excess information.
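
The head-and-tail idea can be sketched as a small helper. This is a hypothetical illustration, not anything Gemini provides:

```python
def head_and_tail(lines: list[str], n: int = 50) -> list[str]:
    """Keep the first and last n lines of a large output; that is usually
    enough for a model to infer the format and draw conclusions."""
    if len(lines) <= 2 * n:
        return lines
    omitted = len(lines) - 2 * n
    return lines[:n] + [f"... ({omitted} lines omitted) ..."] + lines[-n:]
```

Paste the result instead of the full dump, and the model sees the shape of the data without the bulk.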

Grep and Other Commands

If you have a large data set, instead of feeding it to Gemini as is, you can ask it to help you summarise the data with grep and other commands. Instead of feeding it a full list of jdupes duplicates, tell it "We have 1,200 lines from Source A and 1,400 from Source B", and wait for Gemini to provide its "thoughts".
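
As an illustration of summarising before pasting, here is a hypothetical helper that condenses jdupes' default text output (groups of duplicate paths separated by blank lines) into a single line; the file paths are made up:

```python
def summarise_jdupes(output: str) -> str:
    """Condense jdupes' default text output into a one-line summary.
    jdupes separates each group of duplicate paths with a blank line."""
    groups = [g for g in output.strip().split("\n\n") if g.strip()]
    files = sum(len(g.strip().splitlines()) for g in groups)
    return f"{files} files in {len(groups)} duplicate groups"

sample = """\
/photos/a/IMG_001.jpg
/photos/b/IMG_001.jpg

/photos/a/IMG_002.jpg
/photos/c/IMG_002.jpg
/photos/d/IMG_002.jpg
"""
print(summarise_jdupes(sample))
```

One line like this gives the model what it needs to reason about the duplicates without flooding the context window.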

On the Others

Google AI Studio, Le Chat, MyAI and Euria will all say "I'm full" and encourage you either to come back later or to open a new window. Gemini is like the diner in the Monty Python sketch who has one more bite and explodes.

And Finally

One of the reasons I like Gemini over other models is that it's patient: it will accept a lot of information. The drawback is that it will start hallucinating before it ever says "I'm full, I need a break". If you're not cautious it can give you the wrong command, and that can have serious consequences if you don't dry-run and check every command before taking off the safety wheels.

In summary: watch for signs of context splicing, and when you spot them, get a summary, move to a new chat, and continue from there. I don't know how other AI solutions behave.

#AI #context #economy #efficiency #hallucinations
Nizar Kerkeni 🇹🇳 نزار القرقنيnizarus@mastodon.tn
2026-02-25

Thanks to whom?

In American libraries, users are now searching for books... that don't exist

bfmtv.com/tech/ia-dans-les-bib

#Hallucinations #IA

Eric Foltineric_foltin
2026-02-25

AI sucks because it’s confident and wrong at the same time. Nothing like getting a polished TED Talk paragraph that’s 30% straight-up hallucinated nonsense. It’s the intellectual version of some dude at a bar explaining crypto like he invented math. Sounds sharp. Knows nothing. Absolute menace.

ericfoltin.com/sn/7514/

2026-02-22

reddit.com/r/kde/comments/1rbr…

Would KDE benefit from an AI assistant?
copilot style assistant, but less annoying


⬇

reddit.com/r/kde/comments/1rbr…

LLMs are trained on stolen work, so you want people to pay KDE specifically to violate people's copyrights.


💯✅👍 💯✅👍 💯✅👍

Fuck Generative AI, coz:

  • largest theft of private IP in human history
  • generates authoritative bullshit via unsolvable hallucination
  • massive water & energy demands as climate crisis intensifies

#AI #LLMs #FuckGenerativeAI #FuckTechBros #CredulousFools #IPTheft #Hallucinations #ClimateCrisis #BiodiversityCrisis #SocialDestruction #FuckCapitalism #weareselfishcruelbastards #wearetotallyfucked #AsteroidNow
#KDE #Plasma

2026-02-22

@IndyMediaAus Fuck Generative AI, coz:

  • largest theft of private IP in human history
  • generates authoritative bullshit via unsolvable hallucination
  • massive water & energy demands as climate crisis intensifies

#AI #LLMs #FuckGenerativeAI #FuckTechBros #CredulousFools #IPTheft #Hallucinations #ClimateCrisis #BiodiversityCrisis #SocialDestruction #FuckCapitalism #weareselfishcruelbastards #wearetotallyfucked #AsteroidNow

2026-02-18

We're paying too high of a price for #LLM #hallucinations and #crypto mining

youtu.be/_bP80DEAbuo

Jean-François Lecaillonjflecaillon@piaille.fr
2026-02-18

Like AIs, today's politicians don't say "I don't know", they make things up... and it's the real voters who end up hallucinating!!!

#ia #politiciens #hallucinations

laffranchi.canalblog.com/2026/

Led By Gilded FoolsMHowell@kolektiva.social
2026-02-17

Jesus Was a Psychedelic Mushroom, a Controversial Theory Suggests. Could It Reshape Christianity Forever?

"...the New Testament brims with code words for psychedelic rituals. By his logic, every godly encounter, every epiphany, every miracle is an allegory for a trip."

From Popular Mechanics: archive.ph/pmkgR

#Christianity #Bible #NewTestament #Testament #God #Jesus #Christ #Psychedelic #Mushroom #Mushrooms #Psychedelics #Hallucinations

2026-02-16

The Limitations of AI – Dealing With Personality

Reading Time: 3 minutes

For the last two weeks I have been using AI heavily to help with the task of re-organising my libraries. I tried Gemini, Le Chat and MyAI. I focused on Gemini because it gave me good results; Le Chat gave good answers but I hit the token limit too easily, and MyAI, while better in some respects, gave answers that made me waste time rather than move forward.

When using Gemini I find that the lines of code it gives me are good. I always run them in dry-run mode first to ensure the behaviour is as expected. It grates on me that the answers open with "since you live in ... and you do a lot of A and B..." before giving information. It also grates on me that it keeps saying "And then let's do this" rather than letting me finish the task I am currently focused on.
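
The dry-run habit can be baked into your own helper scripts too. Here is a minimal sketch with a hypothetical photo-moving function, where `dry_run=True` only reports what would happen:

```python
import pathlib
import shutil

def move_photos(src_dir: str, dst_dir: str, dry_run: bool = True) -> list[tuple[str, str]]:
    """Plan (and optionally perform) moving every .jpg from src_dir to dst_dir.
    With dry_run=True nothing is touched; review the returned plan first."""
    plan = []
    for photo in sorted(pathlib.Path(src_dir).glob("*.jpg")):
        target = pathlib.Path(dst_dir) / photo.name
        plan.append((str(photo), str(target)))
        if not dry_run:
            shutil.move(str(photo), str(target))
    return plan
```

Reviewing the plan before re-running with `dry_run=False` is the same habit as checking a command an AI suggests before executing it.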

Persistent but Forgetful

It is also erratic in what it remembers and what it doesn't. Tell it some things and it will remember them and repeat them for hundreds of responses; other things it forgets instantly.

If I speak about using an HP machine with Photoprism it will keep assuming that I'm using the Pi. It gets fixated. Even if you tell it "I'm doing this with Photoprism on the HP machine", it doesn't remember.

If I were paying for a limited number of tokens, this behaviour would make it very expensive without providing the quality of service I would expect for 7 CHF per month.

Just now I provided it with a screenshot from Photoprism whose text illustrated the problem of duplicate filenames, but rather than give a usable answer it kept hallucinating modified screenshots. After four hallucinations in a row I started a new chat and tried to discuss the topic a fifth time; it hallucinated again, so I told it off.

Character

When I use Gemini it reminds me of a former friend, a bipolar recovering alcoholic. It loves to pigeonhole you and remind you of something unrelated to the topic you're getting help with. My cycling and hiking habits are not relevant to dealing with my photo library.

When I tell it "I'm using machine A for task B", I expect it to remember within the same chat. It doesn't. It's fixated on the fact that I use a Pi.

Context Switching

If I designed an AI tool I would teach it to switch between Context A and Context B rather than getting fixated: Context A = using the Pi, Context B = using the HP machine. It doesn't take on board that I switch from context to context, so it gives answers filled with wasteful information that is wrong and irrelevant.
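
The kind of context switching described here could be as simple as a lookup table that the assistant consults instead of guessing. A minimal sketch, with made-up task and machine names:

```python
# Minimal sketch of explicit context tracking: remember which machine
# each task runs on instead of getting fixated on one default.
# All task and machine names are illustrative.
contexts: dict[str, str] = {}

def set_context(task: str, machine: str) -> None:
    contexts[task] = machine

def machine_for(task: str) -> str:
    # Fall back to asking rather than assuming the Pi.
    return contexts.get(task, "unknown - ask the user")

set_context("Photoprism", "HP machine")
set_context("Immich", "Pi")
```

Once a task is registered, answers can be tailored to the right machine without the user repeating themselves.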

Verbosity

Of course, we can tell AI to be concise, but I'd like it to be context-smart. Is the question a one-line command or a yes-or-no answer, or did I say I wanted to understand how something works? If AI could automatically detect how concise or verbose to be, that would be fantastic.

Skittish

I found, multiple times, that Gemini is skittish. You're working through a task that it knows will take hours, but rather than asking "how is the progress going?" it encourages you to skip to the next step. That can be welcome, but if you're sorting tens of thousands of photos it takes hours, so it would be better to focus on the current task before moving on.

If I post about the progress, I don't need a long response; "ok" would be enough. In effect I could simply keep quiet until the task is done and then tell it the result.

  • "Since you're on a Pi, would you like me to show you how to check if the CPU is being "throttled" due to heat while it's crunching these hashes?"

This is the type of assumption I dislike.

And Finally - Dealing With AI Personalities

One of the things that is rarely discussed is that dealing with AI means dealing with the personality that was programmed into it. The more you interact with that personality's character, the more toxic it can become. It's good to learn to use some AI models sparingly, to avoid their character flaws wearing on you. This morning, after my run, I found Gemini toxic.

#gemini #hallucinations #imageGeneration #unreliable
Don Curren 🇨🇦🇺🇦dbcurren.bsky.social@bsky.brid.gy
2026-02-15

What is it about the #US that makes it particularly prone to collective #hallucinations? Sort of like a giant #Albania …

Erik JonkerErikJonker
2026-02-15

Reading many posts saying "oh, how awful GenAI is, it was wrong and hallucinated": that is the nature of this technology, folks... You have to think about the context where you are using it and implement it in a way that lets you mitigate this risk, accept it, or choose NOT to use it.

2026-02-13
Scientific Frontlinesflorg
2026-02-13

High-resolution imaging reveals that suppress external visual processing and instead drive visual areas to access the retrosplenial cortex, a region responsible for retrieving memory contents and associations, thereby generating
Psychopharmacology
sflorg.com/2026/02/ns02132601.

2026-02-11

credulous amoral idiot writes article highlighting their credulous amoral idiocy

wired.com/story/malevolent-ai-…

Fuck Generative AI, coz:

  • largest theft of private IP in human history
  • generates authoritative bullshit via unsolvable hallucination
  • massive water & energy demands as climate crisis intensifies

#AI #LLMs #FuckGenerativeAI #FuckTechBros #CredulousFools #IPTheft #Hallucinations #ClimateCrisis #BiodiversityCrisis #SocialDestruction #FuckCapitalism #weareselfishcruelbastards #wearetotallyfucked #AsteroidNow

2026-02-11

A review of the proceedings from four major computer-science conferences showed that none from 2021, and all from 2025, had fake citations.
arxiv.org/abs/2602.05867v1

The authors prefer the term "mysterious citations" which they define this way: "No paper [with] a similar enough title exists. The cited location either does not exist or holds an unrelated paper with different authors."

#AI #LLMs #Hallucinations #Misconduct #ScholComm

2026-02-10

journo spends entire essay whingeing about the ai-enshitification of social media, without once addressing actual social media, talking only about cesspit media

we are surrounded & overrun by abject pig-ignorance

abc.net.au/news/2026-02-10/ai-…

#journalism #WriteGooder #socialmedia #cesspitmedia #enshitification #AI #LLMs #FuckGenerativeAI #FuckTechBros #CredulousFools #IPTheft #Hallucinations #ClimateCrisis #BiodiversityCrisis #SocialDestruction #FuckCapitalism #weareselfishcruelbastards #wearetotallyfucked #AsteroidNow

Mastodon Trends South AfricatrendsZA@mastodon.africa
2026-02-07

The following hashtags are trending across South African Mastodon instances:

#Wordle
#wordle1694
#Motivation
#mastodon
#softwaredevelopment
#ai
#transcription
#hallucinations
#jobseekers
#africa

Based on recent posts made by non-automated accounts. Posts with more boosts, favourites, and replies are weighted higher.
