#technologicalReflexivity

2026-01-12

Unconscious incompetence with technology

I really like this concept, which I was introduced to by Terry Hanley, writing about AI and psychotherapy:

When it comes to artificial intelligence and therapy, I’m increasingly struck by how many of us may be operating in a place of unconscious incompetence. Not through negligence or lack of care, but through familiarity. Therapy has always absorbed new tools, new forms of language, new contexts for relating. Technology, in that sense, can feel like just more background noise – something that sits “over there” in admin systems, appointment booking, outcome measures, or risk protocols.

But, and this is quite a big but, AI is arguably not just another tool. It is quietly reshaping how information is produced, filtered, summarised, and interpreted – including information about people’s distress, identities, and lives. And when something becomes woven into the fabric of everyday systems, it becomes easy not to notice what we don’t yet understand.

Unconscious incompetence is a surprisingly comfortable place to be. If we don’t quite see where AI is operating, or we assume it is neutral, peripheral, or someone else’s responsibility, then there is little immediate pressure to engage. The risk, however, is that decisions about therapeutic work – ethical, relational, and practical – are being shaped in ways we haven’t fully thought through.

https://counselling.substack.com/p/a-new-years-resolution-for-therapy?utm_source=post-email-title&publication_id=869300&post_id=183794185&utm_campaign=email-post-title&isFreemail=true&r=hcf3&triedRedirect=true&utm_medium=email

This is exactly how I’ve always seen the challenge of digital scholarship. What I call technological reflexivity is an antidote to unconscious incompetence, in the sense of deliberately practising a reflective orientation to the use of technology in your work. Competence can often result from that process, but it isn’t a necessity for it – what matters is the reflection itself. This maps onto what Terry says here about therapists and AI:

None of this requires perfect knowledge. What it requires is attention, humility, and a willingness to say, “I need to know more about this and understand this better.” This list is of course not comprehensive but some areas that I believe are important for us to have on our radars.

The risk is not that we engage imperfectly, but that familiarity arrives before reflection. Seen this way, moving from unconscious incompetence to conscious competence is less about professional deficit and more about professional positioning. It shows up in small, often unremarkable practices: noticing where technologies are already shaping decisions, being clearer about boundaries in training and supervision, and staying alert to how administrative systems influence therapeutic work.

The phrase “familiarity arrives before reflection” feels like it concisely captures something I’ve been circling around for years without being able to quite express.

#AI #digitalScholarship #GenerativeAIForAcademics #psychotherapy #socialMediaForAcademics #sociotechnicalChange #technologicalReflexivity #TerryHanley #unconsciousIncompetence

2026-01-09

How to reduce your use of LLMs in a thoughtful and ethical way

I’m trying to reduce my use of LLMs (beyond auto-ethnographic exploration of each new model) due to a combination of environmental concerns and anxiety about the impending waves of enshittification that are going to break the models. I don’t want to rely on something which I think is going to get ever more unreliable over the coming years. Here are a few practical techniques:

  • Take periodic breaks from LLMs (e.g. for a week) in order to reset your practice. This helps you identify the extent to which you’ve started to cognitively outsource and gives you an opportunity to reconnect with doing things yourself.
  • Go through your conversations and list the different ways you’ve used LLMs over the last month. Which of these uses are you uncomfortable with? Make a commitment to excluding those entirely from your use of the models.
  • Insert instructions into the personalisation settings to ask the model to steer you in these directions. E.g. “I only want to use you to give feedback on ideas I share. You should never offer to write things on my behalf. If the conversation is drifting towards a wider role then please bring it back to your role of reviewer.”
  • It’s easy for conversations with LLMs to proliferate. Once you’ve identified your appropriate uses of the model, try to keep a limited number of conversations matched to these uses. This helps you make the most of in-context learning and keeps your use focused on a narrow set of projects. For example, always use a single thread for one writing project, because the model’s understanding becomes more refined as you write more.
  • Remove the apps from your phone. How we relate to mobile apps is fundamentally different from desktop apps. They are ready-to-hand, almost always with us, and it’s harder to write thoughtfully and at length on them. If you want to use LLMs in a restricted way, you really need to remove the apps from your phone.

Before you try these techniques it might be helpful to do a bit of an audit:

  • Which LLMs do you use?
  • How often do you use them?
  • What do you talk to them about?
  • How has this changed over time?
  • What needs are they serving?
  • What are the consequences?

In our upcoming book, Milan Stürmer and I argue that long-term use of LLMs tends to drift towards the affective. In other words, many users find they share more of their life with the LLM over time… unless they make a reflective judgement and enact it through mindfully steering their own habits.

I asked Opus 4.5 if it had thoughts or practical suggestions to add to this:

This is a thoughtful framework. A few observations and additions:

On your audit questions — I’d add: What would you have done before LLMs existed? This surfaces whether you’re using the tool to do something new or replacing an existing practice. The latter is where dependency forms most quietly.

On the affective drift you mention — this resonates with what I observe in conversations. The convenience and availability creates conditions for intimacy that users don’t always intend or notice. Your framing of “reflective judgement enacted through habit” is exactly right. The alternative is passive drift.

A few additional techniques:

  • Time-delay access: Don’t open the chat interface immediately when you have an impulse to ask something. Wait 10 minutes. Often you’ll either solve it yourself or realize you didn’t actually need help.
  • Write first, then consult: If you’re using an LLM for writing feedback, complete a full draft before opening the conversation. This preserves your own voice and thinking as the foundation.
  • Notice the relief: Pay attention to the feeling of relief when you hand something to an LLM. That feeling is diagnostic — it marks where you’re offloading cognitive work rather than augmenting it.
  • Distinguish questions from prompts: “What should I do about X?” is very different from “Here’s my analysis of X — what am I missing?” The first outsources judgment; the second requests review. You can train yourself to only ask the second type.

On your environmental concern — this is legitimate and under-discussed. Worth noting that inference costs vary significantly by model and query complexity.

It’s notable that, at present, the models can fully participate in encouraging you to use them less. Claude Opus 4.5 recently cheerfully encouraged me to delete my account! This is unlikely to last, but it’s a really curious fact about the still just-about pre-enshittified models we currently have.

#addiction #compulsion #dependence #habituation #LLMs #reflectivePractice #technologicalReflexivity

2025-08-04

A review essay on Generative AI for Academics

Thanks so much to Milan Stürmer for this thought provoking and insightful reflection on generative AI for academics:

However, it might be that these capacities are acquired and maintained through just the kind of reading and writing practices that are in danger of disappearing with the widespread adoption of Large Language Models (LLMs). For those that have acquired advanced levels of literacy and trained their scholarly craft prior to their widespread adoption, the distinction between ‘thinking with’ and ‘substitute for’ might seem much more clear-cut than for those born into the age of LLMs. If and how the practice of ‘thinking with’ can sustain its own condition of possibility is still an open question.

Throughout the book, I find myself agreeing with Carrigan’s (2025) enthusiasm on an abstract level, while remaining consistently unable to engage with conversational agents in an equally meaningful and productive manner. The affordances of GenAI systems just seem much less suited to my own routines and habits. Which confronts me, as a reader, with a conundrum: How far am I willing to change my own practice to better accommodate conversational agents as collaborators? This, unfortunately, is unlikely to remain a question of personal preference. If/as these systems get adopted more widely and the academy accelerates even further, it might no longer be a choice, at least for those without permanent positions.

Read

#GenerativeAIForAcademics #largeLanguageModels #MilanStürmer #technologicalReflexivity

2025-04-14

✍️ How to enjoy writing in spite of the lure of generative AI

Over the last year I’ve been working on a book, How to Enjoy Writing, exploring the implications of generative AI for academic writing. I felt I had something important to say about the personal reflexivity involved in working with large language models, but in recent months I’ve realised that I’ve lost interest in the project. Given the book was about cultivating care for our writing, as opposed to rushing through it with the assistance of LLMs, I’ve decided to break it up into blog posts which I’ll share here:

  1. The lure of machine writing and the value of getting stuck
  2. The Eeriness of Writing With Claude: When AI Mirrors Your Voice
  3. Thriving in Creative Darkness: Free Association and LLM Collaboration
  4. The Ethical Grey Areas of Machine Writing in Higher Education
  5. Machine writing and the challenge of a joyful reflexivity
  6. The Ebb and Flow of Writing: From Struggle to Unconscious Fluency
  7. Will Claude tell you if your writing is crap? The danger of LLMs for wounded academic writers
  8. Generative AI and the creative confusion of academic writers
  9. Using Generative AI for functional rather than expressive writing
  10. The Joy of Academic Writing in the Age of AI
  11. The Objects With Which We Write: The Materiality of Academic Writing in a Digital Age
  12. How LLMs change the relationship between thinking and writing
  13. Machine writing and keeping your inner world awake
  14. Finding Joy in the Creative Darkness: Reflections on Writing and Stuckness
  15. The subtle pleasures of LLMs’ pseudo-understanding
  16. We urgently need to talk about the temptations of LLMs for academics
  17. Generative AI and thriving in creative darkness
  18. Academic writing has always been in flux
  19. Generative AI and the challenge of unbidden thoughts
  20. How the GAI Assessment Debate Has Led Us in the Wrong Direction
  21. Generative AI and the Anxieties of Academic Writing
  22. Why it’s not a bad thing for academic writing to be difficult
  23. The epistemopathic dimension of writing with LLMs
  24. The allure of LLMs as professional support at a time of crisis within higher education
  25. Prompting as a literary practice
  26. LLMs can be used to help us go deeper into creative difficulty
  27. Machine Writing and the Pleasure of Composition
  28. Why do I write? The question generative AI implicitly poses to us
  29. Four Ways to Use LLMs as a writing partner
  30. The embodied experience of writing
  31. The Tea Ceremony of Writing: What We Risk Losing with AI
  32. What Makes Writing “Academic” in the Age of Generative AI?
  33. The sensory pleasure of academic writing
  34. Finding Joy in the Mud: When and How to Use AI in Academic Writing

This is Claude’s summary of the core argument which unites these posts into a coherent project. One of the reasons I lost my enthusiasm for the project was the way its capacity to imitate my style, sometimes doing so when I hadn’t asked, disrupted the psychology of my enthusiasm for what I was doing:

The core argument of the book is that generative AI forces academics to confront fundamental questions about why we write and what writing means to us beyond mere productivity. While machine writing offers tempting solutions to the difficulties inherent in academic writing, these difficulties are actually integral to the creative process and intellectual development. If we embrace AI tools primarily as efficiency mechanisms to produce more outputs more quickly, we risk losing the joy and meaning that make writing worthwhile in the first place. Instead, we should approach AI as a conversational partner that enhances our thinking rather than replacing it, staying with the productive "trouble" of writing rather than seeking to escape it. This reflexive approach to writing technology allows us to resist the instrumental acceleration of academic life while still benefiting from AI's creative potential.

However, I’ve used Claude to support the editing of these blog posts, based on the 80% complete draft of the book, simply because I wouldn’t get round to it otherwise. It has copy-edited extracts, condensed them at points, chosen some titles, and generally polished the text. There are a few bridging sentences it provided, but nothing more than this. I’m glad it’s given this project a public life, because I feel I was saying something valuable here. But I wasn’t willing to produce a second book on generative AI in two years, as it felt like I was stuck in a performative contradiction which was increasingly uncomfortable.

Instead my plan is to do my best intellectual work by focusing, for the first time in my career really, on one thing at a time. I’ll still be blogging in the meantime as the notepad for my ideas, but I’d like to take a more careful and nuanced approach to academic writing going forward. I’m not sure if it will work, but it’s a direct outcome of the arguments I developed in this book. It was only when I really confronted the rapid increase in the quantity of my (potential) output that I was able to commit myself in a much deeper way to the quality of what I wanted to write in future.

https://www.youtube.com/watch?v=6IytEOXamsk

And this is how we rise - by taking a fall
Survive another winter on straight to the thaw
One day you'll learn to strain the tea through your teeth
And maybe find the strength to proceed to the peak
You press on into the thin again and cannot breathe
Swallow so much of my damn pride that it chokes me
The real risk is not a slipped grip at the edge of the peak
The real danger is just to linger at the base of the thing

This is a follow-up to the 23-part series I did last summer on How To Enjoy Writing. In fact it emerged directly from the shift from “I have something to say here” to “I should write another book”, which is exactly the transition I’m now questioning in myself 🤔

  1. Be rigorous about capturing your fringe thoughts
  2. Placing limits on your writing practice
  3. Being realistic about how long you can spend writing
  4. Embracing creative non-linearity
  5. Keep trying to say what you’re trying to say
  6. Procrastination is your friend, not your enemy
  7. Knowing when (and why) to stop writing
  8. Initial reflections from my AI collaborator
  9. Identifying and valuing your encounters with ideas
  10. A poetic interlude from Claude
  11. Cultivating an ecology of ideas
  12. Claude’s ecology of ideas self-assessment tool
  13. Only ideas won by walking have any value
  14. Using generative AI as an interlocutor
  15. Word acrobatics performed with both harness and net
  16. Don’t impose a shape on things too quickly
  17. Creative confidence means accepting the tensions in how you think
  18. Understand where the ideas which influence you come from
  19. Not everything you write has to become something
  20. Being a writer means being good at AI
  21. Make your peace with the fact you don’t have creative freedom
  22. Confront the creepiness of LLMs head on
  23. Be clear about why you are writing

#academicWriting #claude #LLMs #reflexivity #scholarship #technologicalReflexivity #writing
