#systemsthinking

Thomas - NBA @nobsagile
2026-02-05

Flight Levels show where decisions live in a system. Nested Work–Feedback Loops show whether those layers actually learn. Flight Levels describe operational, coordination, and strategic decision spaces. Useful. But structure alone does not guarantee adaptation.

A Thread 🧵

no-bullshit-agile.com/flight-l

(1/3)

The Cipher Collective @theciphercollective
2026-02-05
Simon Whatley @simonwhatley
2026-02-04

I've put together a free 7-day email course on systems thinking.

It's for people who design policies, products, and services - and keep running into problems that resist conventional solutions.

Each day covers a different lens: what makes something a system, feedback loops, behaviour over time, how measures shape behaviour, boundary disagreements, leverage points, and mapping.

Sign up for free: humanedesign.co/courses/system

Doug Belshaw @dajb@social.coop
2026-02-04

A lot of advice these days still assumes crises are isolated events.

But in a polycrisis the overall impact of these events exceeds the sum of the individual parts.

Our old, linear mental models do not cope well with feedback loops, time delays, and structural risk.

In this post, I talk about epistemic humility, three bias-aware habits to cultivate, and why mental models sit at the root of systems change.

blog.dougbelshaw.com/mental-mo

#polycrisis #mentalmodels #systemsthinking #complexsystems #climate #ai

2026-02-04

How Systems Lose Touch With Reality

Many governments, companies, and public institutions are not failing because people are evil or stupid. They are failing because they have shifted their attention toward preventing visible breakdowns instead of learning whether the system is actually working. When that shift happens, systems stop asking whether they are right and start asking whether they are safe in the moment. To cope with that pressure, they rely on familiar words and routines that feel calming and authoritative. Over time, those words begin to substitute for real understanding and real feedback. At first this helps people coordinate. Eventually it becomes a liability.

A symbol becomes load bearing when it takes on work that reality should be doing. Instead of evidence, measurement, or accountability guiding decisions, a word or label fills that role. You can see this when questioning the word is treated as dangerous, when invoking it ends debate, when it moves power or money faster than facts, or when it replaces learning with enforcement. At that point the system depends on the symbol itself in order to function.

This pattern shows up repeatedly across institutions. Security matters when there are real threats, but it becomes harmful when the label is used to avoid asking whether actions actually make anyone safer. Once something is called a security issue, evidence often stops mattering. Stability also sounds positive, but in practice it often means that nothing is allowed to change. Systems begin protecting their current shape rather than their long term ability to survive, and anything that introduces learning or uncertainty is treated as reckless.

Rules and compliance are necessary, but trouble starts when following procedures replaces judgment. People continue to obey processes even when those processes are clearly failing, and doing the right thing becomes impossible if it does not fit the rulebook. Expert knowledge is valuable as well, yet it becomes dangerous when it cannot be questioned or updated. In uncertain situations, disagreement carries useful information. When disagreement is punished, the system loses its ability to correct itself.

Metrics and scores feel objective, but they only capture part of reality. When organizations optimize for numbers instead of outcomes, appearances improve while real conditions worsen. Everything looks fine on paper until it fails. Words like efficiency and innovation often justify cutting safety margins, time, and redundancy. Short term gains are rewarded even when they make the system fragile, and when problems finally appear, they tend to appear suddenly and severely. Neutrality and objectivity are frequently used to avoid responsibility. Decisions are framed as technical or inevitable rather than chosen, and when harm occurs no one is accountable because the system is said to have decided.
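One way to make the metrics point above concrete (a toy illustration of my own with invented numbers, not from this essay): when a system greedily improves a proxy score that rewards visible polish while quietly spending unmeasured slack, the reported number rises every period while the real outcome falls.

```python
# Toy Goodhart's-law sketch; every quantity here is invented.
# The reported proxy only sees "polish"; the real outcome also
# depends on hidden "slack" that polishing consumes.

polish, slack = 0.0, 1.0

def proxy(p: float, s: float) -> float:
    return p                    # what the dashboard shows

def outcome(p: float, s: float) -> float:
    return 0.4 * p + 0.6 * s    # what people actually experience

for step in range(10):
    polish += 0.10              # optimize the measured thing
    slack -= 0.08               # quietly spend the unmeasured thing
    print(f"step {step}: proxy={proxy(polish, slack):.2f}, "
          f"outcome={outcome(polish, slack):.2f}")

# The proxy rises by 0.10 per step while the outcome falls by
# 0.008 per step (0.4 * 0.10 - 0.6 * 0.08 < 0):
# fine on paper, worse in fact.
```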

Trying to ban or remove these words usually makes things worse. When a system feels threatened, it clings more tightly to whatever keeps it stable. If symbols are stripped away without replacing the function they serve, the system responds by tightening control and becoming more rigid. This is why fights over language rarely solve the underlying problem.

What helps instead is not destroying symbols but preventing them from replacing reality. That happens when words are forced to connect to consequences. Claims about safety have to be checked against real outcomes. Claims about stability have to include long term risks. Claims about expertise have to allow challenge and correction. Metrics have to reflect what people actually experience. Decision makers have to remain responsible for results. If a word carries power, it must also carry responsibility. When words are required to prove themselves through evidence and accountability, they stop holding the system together on their own and return to being tools rather than foundations.

Reality does not care what we call things. Language can be controlled, disagreement can be punished, and obedience can be enforced, but consequences still arrive. Systems still hit limits. Feedback still exists even when it is ignored. Healthy systems stay in contact with reality, while unhealthy systems focus on managing appearances. Restoring that contact is not rebellion or persuasion. It is maintenance. That is how systems recover the ability to change, learn, and survive.

#systemsthinking

HybridMind42 & Marvin the Cat @HybridMind42@mastodonapp.uk
2026-02-04

Why Refusal Is a Stability Feature in Intelligent Systems

A boundary-first view of performance under constraint

High-performance systems often fail not because activation is too high, but because activation continues while release is blocked.
This pattern appears across biology, institutions, and AI.
🧵⬇️

open.substack.com/pub/hybridmi

#SystemsThinking
#Complexity
#AI
#Resilience
#DecisionMaking
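
A minimal sketch of the claim (my own illustration, not from the linked essay; capacity, rates, and thresholds are all invented): a system whose load keeps rising while release is blocked eventually exceeds capacity, while a system that can refuse new activation stays bounded.

```python
# Toy model of "activation continues while release is blocked".
# Illustrative only; all numbers are made up.

CAPACITY = 100.0   # load beyond this counts as failure
INFLOW = 5.0       # activation arriving per step
STEPS = 60

def peak_load(can_refuse: bool, release_blocked_after: int = 20) -> float:
    """Simulate and return the highest load reached."""
    load, peak = 0.0, 0.0
    for t in range(STEPS):
        release = 5.0 if t < release_blocked_after else 0.0
        intake = INFLOW
        # Refusal: reject new activation when nearing capacity.
        if can_refuse and load + intake - release > 0.8 * CAPACITY:
            intake = 0.0
        load = max(0.0, load + intake - release)
        peak = max(peak, load)
    return peak

print("no refusal:  ", peak_load(can_refuse=False))  # 200.0, past capacity
print("with refusal:", peak_load(can_refuse=True))   # 80.0, bounded
```

The point is structural: once release is blocked, refusal is the only control left, so removing it removes the system's last stabilizing feedback.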

alex@systemic.engineer 🌈 @wolf4earth@hachyderm.io
2026-02-03

Tech is accelerating.
Chasing the rabbit.

I wrote about that.
In my role as Systemic Engineer.

I'm tired of speed.
I yearn for slowness.
(And I feel the world does too.)

systemic.engineering/written-w

#WrittenWithoutAI #SystemicEngineering #SRE #SREforHumans #SystemsThinking #CoherenceUnderLoad

2026-02-03

What if the enigmatic power of Tony Robbins lies not in his motivational rhetoric, but in echoing the Socratic paradox—knowing that one knows nothing? As Robbins draws on the age-old principle of self-inquiry, do we confront the depths of our ignorance or the limits of modern self-help? The intricate dance of certainty and doubt beckons reflection. Full essay: surfacingwisdom.substack.com/p

Josh Siteman @jdsiteman
2026-02-03

Food waste is often framed as a failure of individual behavior. The more I work inside food systems, the more it looks like a design outcome — surplus as insurance, waste as tolerated friction. This piece is an attempt to explore that tension and ask where real leverage actually sits.

open.substack.com/pub/neaptide

2026-02-03

Products don’t suffer because they have constraints.
They suffer when no one examines them.

iamshift.substack.com/unexamin


Thomas Byern @thomas_byern@c.im
2026-02-03

Security keeps getting framed as "add more MFA." That is necessary, but incomplete.

What actually breaks people is recovery. Device verification. Authenticator lock-in. The moment your phone is missing and you discover that your "secure" setup assumed permanent smartphone availability.

A secure system that you cannot operate under stress is not secure.
It is fragile, and fragility creates shortcuts.

#Security #MFA #Identity #Resilience #TechReality #SystemsThinking #ByernNotes

2026-02-03

my direct question to the #systemsthinking community: if you had to redo world3 and include the massive effects of social inequality, big money & power, and global, all-encompassing #posttruth #misinformation campaigns corrupting the feedbacks, what would it look like?

#collapse #limitstogrowth
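
Not an answer, but a sketch of what "corrupting the feedbacks" could mean in model terms (a toy limits-to-growth loop of my own, NOT World3; every equation and parameter is invented for illustration): the response term acts on a distorted perception of scarcity rather than the true signal, so growth continues into the constraint and the decline is steeper.

```python
# Toy limits-to-growth loop, NOT World3; all parameters invented.
# "Misinformation" = the policy response sees an attenuated version
# of the true scarcity signal, while real depletion continues.

def run(distortion: float, steps: int = 200):
    resource, output = 1000.0, 1.0
    for _ in range(steps):
        resource = max(0.0, resource - 0.05 * output)   # depletion
        scarcity = 1.0 - resource / 1000.0              # true signal
        perceived = scarcity * (1.0 - distortion)       # corrupted signal
        growth = 0.03 * (1.0 - perceived)               # acts on perception
        hit = 0.2 * scarcity                            # real consequences
        output = max(0.0, output * (1.0 + growth - hit))
    return round(resource, 1), round(output, 3)

print("honest feedback:   ", run(distortion=0.0))
print("corrupted feedback:", run(distortion=0.8))
```

Inequality and concentrated power would need their own stocks and loops; the sketch only shows that corrupting the signal, not the physics, is enough to change the shape of the trajectory.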

2026-02-03

the US could hit the absolute peak of oil production today; the peak and Seneca collapse of industrial production could set in at any time. it is not that that is off the table. it is just: now we also have a fascist USA. and i would greatly appreciate it if the #systemsthinking community tried to REALLY wrap their heads around that, and not just write it off as: yeah, it's #collapse anyway. quite frankly, that unwillingness to understand how we got here is inexcusable.

2026-02-03

Humans have already been captured by LLMs. It’s what happens when interfaces quietly replace environments, and when environments are optimized for smoothness rather than growth.

You can tell this has already occurred because genre capture makes widespread disorientation feel like competence. LLMs as a system are designed to collapse time, remove uncertainty, and flatten difference. They turn a dynamic, uneven, conflict-ridden process into a single finished state. That move doesn’t describe systems so much as it ends inquiry. Which may be the point.

Humans are systems that are truly complete, in that they are not flattened metrics incapable of thinking or defining ideas outside of their priors. Humans don’t need to be declared complete, because we make no such assertion. We don’t need insistence, repetition, or dismissal of alternatives. We simply operate, indifferent to whether anyone believes in our finality.

LLMs invert the epistemic causality stack. Meaning is caused upstream (by reality, embodiment, constraint) and expressed downstream (in words). Language is late in the chain and accountable to what precedes it. In practice, LLMs flip that order of meaning-making, perhaps as an optimization for their intended purpose rather than simply the stated intent of the corporations that run them.

Ultimately, truth gets replaced by proxies: plausibility, consensus tone, citation-shaped text, confidence. These are signals of knowledge, not knowledge; but the inversion makes them feel causally primary.

#systemsthinking #AI #LLM #epistemology

Yo Ishida / 石田 葉 @YoIshida
2026-02-03

Many social initiatives eventually stop.
The reason is simple:
they rely on goodwill.

This is about designing social systems
that don’t use goodwill as the entry point.

medium.com/@n_2055/reward-base


The Cipher Collective @theciphercollective
2026-02-03
Thomas - NBA @nobsagile
2026-02-02

“We already have feedback.”

That sentence often marks the end of learning.

Most organizations do not suffer from a lack of signals. They collect user input, run surveys, track incidents, analyze metrics, and review performance data in detail. The information is present. What is missing is the structural ability to respond.

no-bullshit-agile.com/the-work

A Thread 🧵

(1/4)

2026-02-02

@wwhitlow

Systems do not have a single "end" or purpose.

Rather, they have a different end for each class of interested parties.

For an LLM, this includes at least the vendor and the end user, and may include a third party using the LLM as an agent.

Their primary purposes may be, respectively:
- Getting clear, accurate answers
- Gaining market share and, eventually, profit
- Maximizing profit

@philosophy

#systemsThinking #systems

2026-02-02

Last week’s winter storm was a sharp reminder that prediction and resilience aren’t the same thing.

We’re getting better at forecasting outcomes — especially with AI — but that doesn’t mean we’re prepared when systems fail.

I wrote about this here:
olddognewflex.com/learn/predic

#SoftwareEngineering #AI #Resilience #SystemsThinking

Thomas Byern @thomas_byern@c.im
2026-02-02

Most migrations fail socially before they fail technically.

Not because people are unwilling, but because the system has hidden contracts: spreadsheets, habits, undocumented workflows, “temporary” scripts that became critical infrastructure.

The code is only the visible part.
The hard part is preserving intent while changing mechanics.

#SoftwareEngineering #Migration #EngineeringCulture #SystemsThinking #Maintainability #TechLeadership #ByernNotes
