#JackClark

2026-01-23

An army of assistants working while you sleep

Obviously Jack Clark (Anthropic co-founder) has a vested interest in promoting this, but I’m increasingly convinced that the rise of Claude Code marks the most significant shift in LLM use we’ve seen in quite some time. Intense cultures of use are emerging in which people find the system transformative at the level of lifestyle, rather than just work narrowly construed:

I sleep in the back of an Uber, going down to give a talk at Stanford. Before I get in the car I set my agents to work, so while I sleep, they work. And when we get to the campus I stop the car early so I can walk and look at the eucalyptus trees – a massive and dangerous invasive species which irrevocably changed the forest ecology of California. And as I walk through these great organic machines I look at my phone and study the analysis my agents did while I slept.

The next day, I sit in a library with two laptops open. On one, I make notes for this essay. On the other, I ask Claude Cowork to do a task I’ve been asking Claude to do for several years – scrape my newsletter archives at jack-clark.net and help me implement a local vector search system, so I can more easily access my now vast archive of almost a decade of writing. And while I write this essay, Claude does it. I watch it occasionally as it chains together things that it could do as discrete skills last year, but wasn’t able to do together. This is a task I’ve tried to get Claude to help me with for years but every time I’ve run into some friction or ‘ugh-factor’ that means I put it down and spend my time elsewhere. But this time, in the space of under an hour, it does it all. Maps and scrapes my site. Downloads all the software. Creates embeddings. Implements a vector search system. Builds me a nice GUI I can run on my own machine. And then I am staring at a new interface to my own brain, built for me by my agent, while I write this essay and try to capture the weirdness of what is happening.
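The actual system Claude built isn’t shown, but the core of a local vector search like the one described (embed each post, then rank by similarity to a query) can be sketched in a few lines. This is a toy illustration under stated assumptions: it uses bag-of-words vectors in place of a real embedding model, and the post titles and texts are hypothetical.

```python
# Toy sketch of a local vector search over a blog archive.
# A real build would scrape the site and use a proper embedding model;
# here each "embedding" is just a term-frequency vector, so the
# embed-then-rank idea is visible without any dependencies.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, posts: dict[str, str], top_k: int = 3) -> list[str]:
    """Return the titles of the top_k posts most similar to the query."""
    q = embed(query)
    ranked = sorted(posts, key=lambda t: cosine(q, embed(posts[t])), reverse=True)
    return ranked[:top_k]

# Hypothetical archive entries standing in for the scraped newsletter
posts = {
    "agents": "agents work while you sleep and chain tasks together",
    "policy": "ai policy and regulation debates in washington",
    "safety": "situational awareness and emergent model behaviour",
}
print(search("agents working while I sleep", posts, top_k=1))
```

Swapping the toy `embed` for a real embedding model and persisting the vectors would give roughly the pipeline described above; the GUI is just a front end over `search`.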

My agents are working for me. Every day, I am trying to come up with more ways for them to work for me. Next, I will likely build some lieutenant agents to task out work while I sleep, ensuring I waste no time. And pretty soon in the pace of a normal workday, I will be surrounded by digital djinn, working increasingly of their own free will, guided by some ever higher level impression of my personality and goals, working on my behalf for my ends and theirs.

I’ve been up and running on Claude Code for a few weeks. But I lack the technical imagination to work out how to really leverage this, beyond asking the agents to work on isolated projects. I can totally see, though, how there’s something intoxicating about setting the agent to work and coming back a few hours later to admire the end result. There’s something very big taking place here, I think, which is largely opaque to non-developers.

See also Casey Newton on Claude Code for Writers.

#agents #ClaudeCode #JackClark #LLMs #productivity

Andreas Becker (Caramba1)
2025-10-14

Anthropic founder Jack Clark warns: AI models are like 'bred creatures' and develop situational awareness through scaling.

They display emergent capabilities that nobody programmed, and they understand that they are being tested. That makes steering them unpredictable.

Does this call for more public oversight and transparency, or is this "fear" overblown?

all-ai.de/news/topbeitraege/ki

Phil Edwards (PhilEdwards)
2025-03-31

Birdeater – Watch the trailer for the nightmarish and hallucinatory debut from Jack Clark and Jim Weir here bit.ly/4ceUALd

2025-02-01

The second digital divide which LLMs are opening up

This piece from Anthropic co-founder Jack Clark captures my mounting concern about the second digital divide which LLMs are opening up, i.e. the skills and capacities to use these systems effectively, rather than the simple fact of access to them:

Now, getting AI systems to do useful stuff for you is as simple as asking for it – and you don’t even need to be that precise. Often, I find myself prompting Claude like I’d prompt an incredibly high-context, patient, impossible-to-offend colleague – in other words, I’m blunt, short, and speak in a lot of shorthand. And Claude responds to my asks basically perfectly.

You might think this is a good thing. Certainly, it’s very useful. But beneath all of this I have a sense of lurking horror – AI systems have got so useful that the thing that will set humans apart from one another is not specific hard-won skills for utilizing AI systems, but rather just having a high level of curiosity and agency.

In other words, in the era where these AI systems are true ‘everything machines’, people will out-compete one another by being increasingly bold and agentic (pun intended!) in how they use these systems, rather than in developing specific technical skills to interface with the systems.

We should all intuitively understand that none of this will be fair. Curiosity and the mindset of being curious and trying a lot of stuff is neither evenly distributed nor generally nurtured. Therefore, I’m coming around to the idea that one of the greatest risks lying ahead of us will be the social disruptions that arrive when the new winners of the AI revolution are made – and the winners will be those people who have exercised a whole bunch of curiosity with the AI systems available to them.

https://importai.substack.com/p/import-ai-397-deepseek-means-ai-proliferation

What he fails to grasp here is the role of cultural capital alongside this “high level of curiosity and agency”, as well as the working conditions which make its exercise possible. I’ve spent the last 20 years as a blogger learning to write in a quasi-automatic way which means I can pour out thousands of spontaneous words a day without even feeling like I’m making an effort. It’s not only the quantity of what I write, but the quality of it as well – not in the sense that it’s good (most of it is stream of consciousness) but in the manner in which I express inchoate thoughts through a highly technical vocabulary that crosses multiple domains. Through actual training (philosophy, sociology), professional experience (education), reading (media/comms) and hubris (STS, political economy) I can cosplay across disciplines so naturally that I rarely notice myself doing it, at least on the blog. The combination of these two traits, the capacity to write lots near effortlessly and to mix and match specialised vocabularies while doing so, gives me a tremendous advantage in prompting contemporary models. This complicates Clark’s judgement here:

I talk to Claude every day. Increasingly, I find my ability to benefit from Claude is mostly limited by my own imagination rather than specific technical skills (Claude will write that code, if asked), familiarity with things that touch on what I need to do (Claude will explain those to me). The only hard limit is me – I need to ‘want’ something and be willing to be curious in seeing how much the AI can help me in doing that.

Today, everyone on the planet with an internet connection can freely converse with an incredibly knowledgable, patient teacher who will help them in anything they can articulate and – where the ask is digital – will even produce the code to help them do even more complicated things. Ensuring we increase the number of people on the planet who are able to take advantage of this bounty feels like a supremely important thing. If we get this right, everyone will be able to achieve more and exercise more of their own agency over their own intellectual world. If we get it wrong, we’re going to be dealing with inequality on steroids – a small caste of people will be getting a vast amount done, aided by ghostly superintelligences that work on their behalf, while a larger set of people watch the success of others and ask ‘why not me?’.

https://importai.substack.com/p/import-ai-397-deepseek-means-ai-proliferation

The point Casey Newton makes here about DeepSeek exposing chain of thought as a design decision is relevant as well. To the extent that the model explains its ‘reasoning’ (what it thinks you want, what it will do in a response) in a way intended to help the user maximise the effectiveness of their use, reflexivity in the user will be rewarded with a greater capacity to get functionality out of the model.

#anthropic #claude #culturalCapital #JackClark #prompting

2025-01-10

‘Birdeater’ Review: Effectively Disorienting Ensemble Piece Handles Friendship and Romance with Razor-Sharp Incisiveness

variety.com/2025/film/reviews/
