Study: Dolly Parton is the Greatest
Kim Young Sung And Oh Dong Min Are Brothers Whose Skepticism Turns To Fear In Horror Film “Whispering Water” #Brothers #Dong #Fear #Film #Horror #Kim #Min #Skepticism #Sung #Turns #Water #Whispering #Young
https://kpopnewshub.com/kim-young-sung-and-oh-dong-min-are-brothers-whose-skepticism-turns-to-fear-in-horror-film-whispering-water/?fsp_sid=15224
Got a question for you all: when’s the last time an album just absolutely, from out of nowhere, blindsided you and kept you enthralled for the entirety of its length on the first play? I don’t mean from your favorite bands (though that’s pretty damn cool, too), but from a completely out-of-left-field pick in the promo sump? Georgian funeral directors Ennui have been toughing it out with their craft since 2012’s Mze Ukunisa, with the then-duo working with what they had available. Now, the duo is a full-fledged band, and on their fifth album, Qroba, they hit graveyard paydirt.
From the very first moment the keyboards swell in opener “Antinatalism,” you know you’re in for some fun(erally dismal) times. Creeping at the pace of a drugged-out snail in the Antarctic region, “Antinatalism” brilliantly glistens while also dragging you through the icy snow with its crashing (and crushing) riffs and methodical drumming. When founder David Unsaved growls, it’s with the force of the bear from Annihilation after freshly devouring Daniel Neagoe (Eye of Solitude and a fuckton of other bands), further throwing the listener into a pit of existential despair and woe. Most impressively, at over 10 minutes, the time flies by effortlessly, due to its ability to draw you in and keep you enthralled for the entire length.
The other four songs on this 62-minute behemoth hold up pretty damn well themselves. The album centerpiece “Decima” features mournful melodies from a panduri (a three-stringed Georgian lute) that accentuate the sorrow midway into the song, giving the album even more breathing room to expand and envelop you in its snare. “Becoming Void” sounds like Canto III on steroids, picking a fight with Turn Loose the Swans-era My Dying Bride for lunch money, and again feels like a six-minute song at over 15 minutes due to how well-crafted it is. None of the songs overstay their welcome, but they still feel like soulful journeys to a mournful end.
Produced by Greg Chandler (Esoteric), Qroba does an amazing job of giving breathing space while also smothering the listener. The drums feel cavernous and monstrous, plodding with intent. The scant few leads amplify the dour vibes a thousandfold, making damn sure there’s no light escaping this realm. If there were a nitpick to be had, it’s that this is the epitome of mood music. You’re not going to grab this on a warm, sunny day with some lemonade and a book to read. Qroba is solemn, reflective music if there ever was some.
And this fits in quite well with other solemn, reflection-time music from the likes of Skepticism and Shape of Despair. Qroba came at a time when I didn’t know I needed it, with all the chaos going on around the world as well as in my own circle, but I’m glad it did, as this has been playing non-stop since I first laid ears upon it. If you’ve never taken to funeral doom, Ennui won’t change your mind, as this is sorrow personified and exponentially amplified. However, if you need a severe purging, welcome to your new favorite band.
Rating: 4.0/5.0
DR: 7 | Format Reviewed: 319 kbps mp3
Label: Meuse Music Records
Websites: Bandcamp | Facebook
Releases Worldwide: February 27th, 2026
I can't really understand 1) believing in fundamentalist Christianity *and* 2) liking and wanting to believe in fundamentalist Christianity.
Growing up until about age 20, I was only in the former: I really resented my belief system; I only believed it because I thought at the time "facts and logic" supported it; it was a reluctant belief.
It felt so oppressive to me that when I finally deconverted, I felt an incredible sense of euphoria for like a week. #religion #christianity #skepticism
I added a paragraph about the boy who cried wolf to one of my essays. This brings up some really deep epistemic issues that I wish I had time to explore. It is very easy to reject something true because the source is unreliable or people who make that type of claim are unreliable, or to accept something false because the source seems trustworthy and knowledgeable.
https://www.bookandsword.com/2025/01/11/knowing-things-is-hard/#cried-wolf #philosophy #epistemology #skepticism #criticalThinking
Two weeks for family, and here we are with more context for a 32k limit. Looks like the AI Agent in my head wasn't ready for the changes in my life, or maybe I'm getting fatigued. Either way...
If you felt that the tool isn't "tooling", give a read:
https://jeferson.me/blog/2026/02/05/two-weeks-of-ai
#Agent #AI #Bash #Blog #Bubble #Claude #China #DnD #Git #MetricSystem #OpenCode #Security #Skepticism #Technology
“The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt.”
“The Triumph of Stupidity”, Bertrand Russell (1933)
I always loved this thought.
I saw this on Mastodon and almost had a stroke.
@davidgerard wrote:
“Most of the AI coding claims are conveniently nondisprovable. What studies there are show it not helping coding at all, or making it worse
But SO MANY LOUD ANECDOTES! Trust me my friend, I am the most efficient coder in the land now. No, you can’t see it. No, I didn’t measure. But if you don’t believe me, you are clearly a fool.
These guys had one good experience with the bot, they got one-shotted, and now if you say “perhaps the bot is not all that” they act like you’re trying to take their cocaine away.”
First, such claims are falsifiable, and proving propositions about algorithms (i.e., code) is part of what I do for a living. Human-written code and AI-written code can both be tested mathematically, which means you can falsify propositions about them. You would test them the same way.
There is no intrinsic mathematical distinction between code written by a person and code produced by an AI system. In both cases, the result is a formal program made of logic and structure. In principle, the same testing techniques can be applied to each. If it were really nondisprovable, you could not test to see what is generated by a human and what is generated by AI. But you can test it. Studies have found that AI-generated code tends to exhibit a higher frequency of certain types of defects. So, reviewers and testers know what logic flaws and security weaknesses to look for. This would not be the case if it were nondisprovable.
You can study this from datasets where the source of the code is known. You can use open-source pull requests identified as AI-assisted versus those written without such tools. You then evaluate both groups using the same industry-standard analysis tools: static analyzers, complexity metrics, security scanners, and defect classification systems. These tools flag bugs, vulnerabilities, performance issues, and maintainability concerns. They do so in a consistent way across samples.
A widely cited analysis of 470 real pull requests reported that AI-generated contributions contained roughly 1.7 times as many issues on average as human-written ones. The difference included a higher number of critical and major defects. It also included more logic and security-related problems. Because these findings rely on standard measurement tools — counting defects, grading severity, and comparing issue rates — the results are grounded in observable data. Again, I am making a point here. It’s testable and therefore disprovable.
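As a rough illustration of how such a comparison is measurable at all, here is a minimal Python sketch. The groups, issue counts, and severity labels are invented placeholders, not data from the cited 470-PR analysis; the point is only that "issues per pull request" is a number you can compute and compare across groups.

```python
# Hypothetical sketch: comparing static-analysis findings between
# AI-assisted and human-written pull requests. All data below is
# made up for illustration.
from collections import Counter

def issue_rate(findings):
    """Average number of flagged issues per pull request."""
    return sum(len(f) for f in findings) / len(findings)

def severity_profile(findings):
    """Tally how many findings of each severity a group produced."""
    return Counter(sev for f in findings for sev in f)

# Each inner list is one PR's findings from a scanner (severity labels).
human = [["minor"], [], ["major"], ["minor", "minor"]]
ai    = [["major", "minor"], ["critical"], ["minor"], ["major", "minor"]]

ratio = issue_rate(ai) / issue_rate(human)
print(f"AI/human issue ratio: {ratio:.2f}")  # prints "AI/human issue ratio: 1.50"
print(severity_profile(ai))
```

Because both groups are run through the same measurement pipeline, a claim like "group A has 1.7x the issue rate of group B" is directly checkable, which is the opposite of nondisprovable.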
This is a good paper that goes into it:
In this paper, we present a large-scale comparison of code authored by human developers and three state-of-the-art LLMs, i.e., ChatGPT, DeepSeek-Coder, and Qwen-Coder, on multiple dimensions of software quality: code defects, security vulnerabilities, and structural complexity. Our evaluation spans over 500k code samples in two widely used languages, Python and Java, classifying defects via Orthogonal Defect Classification and security vulnerabilities using the Common Weakness Enumeration. We find that AI-generated code is generally simpler and more repetitive, yet more prone to unused constructs and hardcoded debugging, while human-written code exhibits greater structural complexity and a higher concentration of maintainability issues. Notably, AI-generated code also contains more high-risk security vulnerabilities. These findings highlight the distinct defect profiles of AI- and human-authored code and underscore the need for specialized quality assurance practices in AI-assisted programming.
https://arxiv.org/abs/2508.21634
The big problem in discussions about AI in programming is the either-or thinking, when it’s not about using it everywhere or banning it entirely. Tools like AI have specific strengths and weaknesses. Saying ‘never’ or ‘always’ oversimplifies the issue and turns the narrative into propaganda that creates moral panic or shills AI. It’s a bit like saying you shouldn’t use a hammer just because it’s not good for brushing your teeth.
AI tends to produce code that’s simple, often a bit repetitive, and very verbose. It’s usually pretty easy to read and tweak. This helps with long-term maintenance. But AI doesn’t reason about code the way an experienced developer does. It makes mistakes that a human wouldn’t, potentially introducing security flaws. That doesn’t mean we shouldn’t use it where it works well, which is not everywhere.
AI works well for certain tasks, especially when the scope is narrow and the risk is low. Examples include generating boilerplate code, internal utilities, or prototypes. In these cases, the tradeoff is manageable. However, it’s not suitable for critical code like kernels, operating systems, compilers, or cryptographic libraries. A small mistake in memory safety or privilege separation can lead to major failures, and problems with synchronization, pointer management, or access control can be just as catastrophic.
Other areas where AI should not be used include memory allocation handling, scheduling, process isolation, or device drivers. A lot of that depends on implicit assumptions in the system’s architecture. Generative models don’t grasp these nuances. Instead of carefully considering the design, AI tends to replicate code patterns that seem statistically likely, doing so without understanding the purpose behind them.
Yes, I’m aware that Microsoft is using AI to write code everywhere I said it should not be used. That is the problem. However, political pundits, lobbyists, and anti-tech talking heads are discussing something they have no understanding of and aren’t specifying what the problem actually is. This means they can’t possibly lead grassroots initiatives into actual laws that specify where AI should not be used, which is why we have this weird astroturfing bullshit.
They’re taking advantage of the reaction to Microsoft using AI-generated code where it shouldn’t be used to argue that AI shouldn’t be used anywhere at all in any generative context. AI is useful for tasks like writing documentation, generating tests, suggesting code improvements, or brainstorming alternative approaches. These ideas should then be thoroughly vetted by human developers.
Something I’ve started to notice about a lot of the content on social media platforms is that most of the posts people are liking, sharing, and memetically mutating—and then spreading virally—usually don’t include any citations, sources, or receipts. It’s often just some out-of-context screenshot with no reference link or actual sources.
A lot of the anti-AI content is not genuine critique. It’s often misinformation, but people who hate AI don’t question it or ask for sources because it aligns with their biases. The propaganda on social media has gotten so bad that anything other than heavily curated and vetted feeds is pretty much useless, and it’s filled with all sorts of memetic contagions with nasty hooks that are optimized for you algorithmically. I am at the point where I will disregard anything that is not followed up with a source. Period. It is all optimized to persuade, coerce, or piss you off. I am only writing about this because I’m actually able to contribute genuine information about the topic.
That they said symbolic propositions written by AI agents (i.e., code) are non-disprovable because they were written by AI boggles my mind. It’s like saying that an article written in English by AI is not English because AI generated it. It might be a bad piece of text, but it’s syntactically, semantically, and grammatically English.
Basically, any string of data can be represented in a base-2 system, where it can be interpreted as bits (0s and 1s). Those bits can be used as the basis for symbolic reasoning. In formal propositional logic, a proposition is a sequence of symbols constructed according to strict syntax rules (atomic variables plus logical connectives). Under a given semantics, it is assigned exactly one truth value (true or false) in a two-valued logic system.
They are essentially saying that code written by AI is not binary, isn’t symbolically logical at all, and cannot be evaluated as true or false by implying it is nondisprovable. At the lowest level, compiled code consists of binary machine instructions that a processor executes. At higher levels, source code is written in symbolic syntax that humans and tools use to express logic and structure. You can also translate parts of code into formal logic expressions. For example, conditions and assertions in a program can be modeled as Boolean formulas. Tools like SAT/SMT solvers or symbolic execution engines check those formulas for satisfiability or correctness. It blows my mind how confidently people talk about things they do not understand.
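To make concrete the claim that program conditions can be modeled as Boolean formulas and checked, here is a minimal sketch in plain Python. It brute-forces satisfiability over the variables instead of using a real SAT/SMT solver, and the guard expressions are invented examples:

```python
# Minimal sketch: checking a program condition for satisfiability by
# exhaustive enumeration. Real tools (SAT/SMT solvers, symbolic
# execution engines) do this far more cleverly; this only illustrates
# that code conditions are formal propositions with truth values.
from itertools import product

def satisfiable(formula, variables):
    """Return a satisfying assignment, or None if the formula is UNSAT."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if formula(env):
            return env
    return None

# A branch guard from some hypothetical program: (a and not b) or c
guard = lambda env: (env["a"] and not env["b"]) or env["c"]
print(satisfiable(guard, ["a", "b", "c"]))   # some satisfying assignment

# A contradiction: a and not a -- a provably unreachable branch.
dead = lambda env: env["a"] and not env["a"]
print(satisfiable(dead, ["a"]))              # prints None: the branch is dead
```

Whether the source text was typed by a human or emitted by a model makes no difference to this procedure; the formula either has a satisfying assignment or it does not.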
Furthermore, the fact that they don’t recognize the projection is wild to me.
@davidgerard wrote:
“But SO MANY LOUD ANECDOTES! Trust me my friend, I am the most efficient coder in the land now. No, you can’t see it. No, I didn’t measure. But if you don’t believe me, you are clearly a fool.”
They are presenting a story—i.e., saying that the claims are not disprovable—and accusing computer scientists of using anecdotal evidence without actually providing evidence to support this, while expecting people to take it prima facie. They are doing exactly what they accuse others of doing.
It comes down to this: they feel that people ought not to use AI, so they are tacitly committed to a future in which people do not use AI. For example, a major argument against AI is the damage it is doing to resources, which is driving up the prices of computer components, as well as the ecological harm it causes. They feel justified in lying and misinforming others if it achieves the outcome they want—people not using AI because it is bad for the environment. That is a very strong point, but most people don’t care about that, which is why they lie about things people would care about.
It’s corrupt. And what’s really scary is that people don’t recognize when they are part of corruption or a corrupt conspiracy to misinform. Well, they recognize it when they see the other side doing it, that is. No one is more dangerous than people who feel righteous in what they are doing.
It’s wild to me that the idea that if you cannot persuade someone, it is okay to bully, coerce, harass them, or spread misinformation to get what you want—because your side is right—has become so normalized on the Internet that people can’t see why it is problematic.
That people think it is okay to hurt others to get them to agree is the most disturbing part of all of this. People have become so hateful. That is a large reason why I don’t interact with people on social media, really consume things from social media, or respond on social media and am writing a blog post about it instead of engaging with who prompted it.
Users Are Too Dependent on Centralized Techno-Fascist Corporate Structure to Ever Leave Discord
I’m watching people scatter into countless real-time chat alternatives to Discord after Discord started pulling the age-verification and age-gating card.
It’s very frustrating because people are entirely missing the point of a community and how social networks work. Real-time platforms and social media networks only work well when a large number of people share the same space at the same time. If everyone creates separate servers or competing apps, the result is fragmentation that makes it unviable.
One reason Bluesky became so successful is its invitation and starter-pack move, which let groups of friends migrate together as cliques. This kept communities intact. Moving as cliques preserves network structure, whereas random scattering does not. People do not seem to intend to move off of Discord as cliques or subgraphs of networks. And the whole reason people were on Discord was to host their communities, so an alternative becomes pointless if your community doesn’t remain intact.
Instead of an active, strongly connected, possibly distributed network, you get dozens of small pockets. I am referring to a potential distributed network rather than a single centralized platform, because Matrix is an example of a decentralized chat protocol. Not all alternatives have to be centralized like Discord. Technically, many older chat protocols, such as XMPP and IRC, are examples of federated real-time synchronous messaging. They allowed communication between users on different, independently operated servers. Federation means that multiple servers can interconnect so that users from separate networks can exchange messages with one another seamlessly.
Decentralized alternatives would not be a problem if people moved to the same distributed network as cohesive groups. However, what I am seeing is that people move in disconnected and stochastic ways to entirely separate distributed networks, so communities are not kept intact. For example, when people move to XMPP servers or Matrix servers, it bifurcates and disconnects social networks. Notice I said XMPP or Matrix, which logically means people are on Matrix but not XMPP, or they are on XMPP but not Matrix. That implies a person would need to be on both Matrix and XMPP to speak to their original community from Discord if it split down the middle. To synchronize conversations in chats, there would need to be a bridge. It’s a pretty complicated solution.
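The clique-versus-scatter point can be sketched as a toy calculation: treat a community as a set of edges (who talks to whom) and count how many edges survive a migration. The names, edges, and platform choices below are all invented for illustration:

```python
# Toy sketch of the fragmentation argument. An edge survives a
# migration only if both endpoints land on the same platform
# (ignoring bridges, which are rare and complicated to run).
def surviving_edges(edges, placement):
    """Edges whose endpoints ended up on the same platform."""
    return [(a, b) for a, b in edges if placement[a] == placement[b]]

community = [("ana", "bo"), ("bo", "cy"), ("cy", "ana"), ("ana", "dee")]

# Moving as a clique: everyone picks the same destination.
clique_move = {u: "matrix" for edge in community for u in edge}
print(len(surviving_edges(community, clique_move)))  # prints 4: all ties intact

# Stochastic scatter: half land on Matrix, half on XMPP.
scatter = {"ana": "matrix", "bo": "xmpp", "cy": "matrix", "dee": "xmpp"}
print(surviving_edges(community, scatter))  # prints [('cy', 'ana')]: 3 of 4 ties cut
```

The exact numbers don't matter; the structural point is that random scattering destroys most of the ties a community was built on, while coordinated group moves preserve them.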
The likely outcome is that people will remain on the dominant platform because of its scale and structure. The deeper irony is that while people may want independence from corporate platforms, they often struggle to organize effectively without the centralized structure those platforms provide. They’ve become so dependent on corporate structures to support their communities that they have no clue how to organize their own social networks in a sustainable way.
I’ve always been an internet nerd, but most of my social life has been offline. I view my interactions with the social app layer of the internet as a game, so losing that domain of the Internet is not devastating to me.
I’ll give you an example. This is a WordPress site. You hear this insincere nostalgia from Millennials and Gen X for a simulacrum that never was, especially concerning forums. Check this out: when you go into the plugin installation section of WordPress, forum plugins sit right there on the second row of results.
That means any WordPress site has the capability to host a forum. They’re nostalgic for a setup where you can use a simple install script on any hosting service to install WordPress. After that, you can then just add a plugin to turn it into a forum. Hell, they can do this on WordPress.com if they don’t want to self-host.
You can make a forum, but no one will use it because they’d rather use a centralized platform like Reddit. Users have become so dependent on corporations to structure and organize communities that they can’t do it themselves. It’s sort of like the cognitive debt that accrues when people outsource their thinking to AI.
The issue is not that forums are hard to host or create; rather, the issue is that people have become so dependent on centralized corporate structures that they can’t maintain or organize their own communities, which is why everyone ends up on Reddit or Discord. A reason I keep hearing for why people don’t want to leave Discord is that it’s hard to recreate the community structure that Discord’s features provide. They claim that they want independence from corporate platforms, but rely on the centralized structure those platforms provide to function socially.
People say they want decentralized freedom, but in practice they depend on centralized platforms to maintain social cohesion. Stochastically scattering to the digital winds of the noosphere destroys the very communities they’re trying to preserve.
New Yorker & Daily Show Platform a Kook
"33rd Hanteo Music Awards" Daesang Winners Met With Skepticism And Criticism #33rd #Awards #Criticism #Daesang #Hanteo #Met #music #Skepticism #Winners
https://kpopnewshub.com/33rd-hanteo-music-awards-daesang-winners-met-with-skepticism-and-criticism/?fsp_sid=12912
The 20th century’s most horrific evils were carried out by societies soaked in religious tradition, proving religion is no safeguard against wrongdoing.
The Power of Not Knowing: Embracing Uncertainty and Recognizing the Illusion of Knowledge
In a world obsessed with certainty, expertise, and constant information, it can feel uncomfortable, even shameful, to admit that we do not know something. From the moment we enter school, we are conditioned to seek answers, to value knowledge as an indicator of intelligence, and to fear being wrong. Yet, paradoxically, the truth is that no one, not even the most accomplished scholars, scientists, or thought leaders, knows everything. Human knowledge, though vast and impressive, is finite, […]

A Thing I Miss About Old-School Forums Was That It Was Harder to Bullshit People
I don’t miss old-school forums. However, one of the very few norms I do miss about them is that you were required to support your argument with sources.
On present social media, sources don’t matter. Evidence doesn’t matter. All that matters is whether enough people with a large enough pool of influence say something so frequently that it becomes the norm and the consensus. Asking for sources has fallen outside the norm entirely. People see massive accounts with massive influence on a platform saying it in a trending section, their feeds, their For You pages, etc., and they adopt it. No questions asked. And if there is dissent, the dissenters are promptly beaten down, because the consensus is whatever these authority figures say without evidence. That was not the norm on bulletin board forums. Not at all.
It doesn’t matter if it is Reddit, Bluesky, or Mastodon—scrolling through the trending topics section prompted this—people will write out an opinion they know others will likely disagree with, prima facie, as if it is well supported. As in, it has zero evidence. Then people collectively affirm that unsupported argument until it becomes a consensus. Then, when it drums up engagement, they flesh out the argument as if it is well supported.
While you’re talking to real people on the fediverse and maybe Reddit (though it might be a real person who had an LLM draft a response), I know for a fact from software developers that AI agents simulating users are running around on Bluesky, and they are so convincing you cannot tell they are AI agents. I bring this up because old-school astroturfing worked by planting a person in the audience to create a call-and-response dynamic, artificially manufacturing a dialogue where a narrative forms and spreads.
You see it all the time on TikTok, where an influencer responds to a random comment. Now that AI agents can simulate people convincingly enough to fool human intuition, that same dynamic can be automated. Hell, Bluesky has an entire section of their documentation dedicated to setting up bots.
https://docs.bsky.app/docs/starter-templates/bots
Here is the starter script to make bots on Bluesky:
This folder contains a starter template for creating a bot on Bluesky. In this example, the bot posts a smiley emoji on an automated schedule once every three hours.
https://github.com/bluesky-social/cookbook/tree/main/ts-bot
And, of course, you create the disease so you can sell the cure. And, of course, after flooding networks with bots, you have bot detectors:
Bluesky Bot Detector
Check if an account is likely to be a bot. Our algorithm analyzes multiple factors including posting patterns, profile characteristics, and network behavior. It is still a beta, take the results with a grain of salt…
https://bskycheck.com/botcheck.php
Yeah, the fediverse has automated accounts, too. But they are not LLM agents. Examples of this on Bluesky are Letta AI Social Agent, Bluesky Automation Agent, Telegram to Bluesky Agent, and LLM Bot Framework. It absolutely blows my mind how much anti-AI discourse on Bluesky is actively spread by AI itself, where it collects data from how real people interact with it, spreads it, and internalizes narratives from it. There are two versions: good ones, and bad ones. The bad ones exist so that people make incorrect associations and underestimate their ability to tell what is and isn’t an agent. Since the fediverse and Bluesky are tightly correlated with the same memeplex, it spreads here memetically.
They create the impression of a consensus that is then adopted, such that it becomes the consensus. This would not have been very effective within the culture of old-school bulletin boards, because everyone’s default mode was skepticism—they would ask for corroborative sources, read the sources, and even explain to whoever posted them why it was a bad source.
What prompted this was an arbitrary heuristic about the number of people you should have within a collective, coalition, etc., with zero proof and zero evidence on Mastodon.
I’m not in high school anymore. I am not in college anymore. Because of things like this, I have contempt for your average Internet user and am not interested in creating or running communities, personally. You guys are insufferable as fellow users; I wouldn’t want to be in charge of managing your social interactions. I don’t want to admin an online community in any way, shape, or form—especially for free. I have a husband and a kid—although I don’t have custody of my son; that is a long story. So, no, I don’t miss forums, because as someone approaching 40 (I will be 38 this year), I am in a different space in life.
In case you’re wondering what the hell I am talking about, whenever I address misinformation outside of narrow contexts, I don’t explicitly mention it, because that actually creates a memetic payload. One of my biggest frustrations about people who argue online against misinformation is that they spread it. Think of it like this: when you requote something, quote it in the text, or share screenshots, your response is the wrapper, and we can view the misinformation as the payload. It is like hiding poison in food.
When you argue against misinformation and you don’t abstract it to quarantine the harmful memetic aspects, your rebuttal is a wrapper. A person who is susceptible to the memetic contagion will disregard your argument—the wrapper—and consume the misinformation.
I do not miss forums; however, I do miss cultural norms where people actually cared about facts. Influence is a form of power, and when you seek power where someone else loses, that is called domination. I miss when people cared about genuine collaboration instead of dominating and abusing each other.
I am keeping tabs on the tech discourse here concerning features people want. There’s an intersection and a convergence between Bluesky’s transparency report and what people are screaming about here: harassment. It is sort of like how people were giving Bluesky a hard time about user moderation tools being used to block Nazis instead of suspending them. The issue is that it was indicative of a culture tolerant of Nazis. The fact that the tools existed indicated the issue.
People want tools to handle harassment on the fediverse; however, that implies a hostile global forum culture. That’s the problem. The social layer of the Internet is connected to hostile, antisocial memeplexes.
That’s also a reason why I wouldn’t ever run a community now. Hostility is endemic to Internet cultures.
The major issue I have with how most content online is written, whether on blog posts or social media, is that it is written in a persuasive, not expository, rhetorical style. If your intention is to inform, you use exposition.
However, they’re not trying to inform you; rather, they are framing and presenting it in a way to influence you. Most content is not written in a way where you are presented with data that you interpret, evaluate, and use to come to your own conclusions; rather, it is presented and framed in a particular way they want you to interpret it.
If I feel pressure to go in one direction or the other, I will stop reading. They don’t understand that you can objectively describe knowledge without framing it normatively. It comes across as, “Don’t believe your lying eyes and don’t trust your own thoughts, experiences, values, beliefs, and reactions — trust us instead.” I added this section while literally trying to read a blog post about protocols linked from the fediverse. The entire thing was couched persuasively. It was not a technical description of evaluation; rather, it was a persuasive post implicitly filled with normative prescriptions.
I am Not Migrating Back To IRC
So, with this whole Discord surveillance age-verification situation, I am seeing lots of talk on every social media platform, including, ironically, Discord itself, about moving back to IRC. I am not going back to IRC. Then again, I don’t have much of a stake in this, because I use Discord just to monitor occult cults and extremist groups, lol. So it’s not like I would really be displaced, anyway. I would just need to figure out a new exploit or hack.
I’m a computational biologist, so I am half in biology and half in software engineering. I tend to look at technology and evolution as analogous to one another. Software, dynamical systems, and evolution are analogous. Evolution functions like a survivorship process: lineages that leave more descendants become more represented over time, while drift, mutation, recombination, and migration all perturb the system. It’s a stochastic dynamical system.
Speciation is a change under isolation. Once gene flow is cut off, divergence accumulates. Eventually, compatibility breaks, creating a divergence. A fork. You can think of it in a version control system like this: you stop merging upstream commits. You accumulate independent changes, and at some point the codebases are no longer interchangeable.
With evolution, even when traits resemble ancestral forms, they arise through new mutations in the current system. It’s a forward, not backward, branch. So from an evolutionary and software engineering perspective, progress happens by forking and optimizing what exists. That means you will not get progress by trying to migrate everyone back to legacy infrastructure.
I’m not migrating to IRC because it is not progress or evolution. It is technologically regressive. Regressive attitudes are why we are essentially in a weird, industrial, cyberpunk version of the 1930s right now. Hoping old-school forums make a comeback and that everyone migrates back to IRC is a technological and societal regression. It’s the same nostalgia-driven impulse that MAGA rides on. It’s not evolutionary or progressive. It’s regressive and backward. So that triggers all the alarm bells in my head, because this is sort of how we got here. Donald Trump rode in on a regressive platform of nostalgia and populism. Roughly 10–12% of Bernie Sanders supporters in 2016 voted for Donald Trump under that wave.
Trump’s 2016 rise was driven significantly by populist themes such as critiques of establishment politics, nationalism, economic resentment, and appeals to voters dissatisfied with the status quo in both parties. Anyone paying attention in 2026 should see the cycle repeating itself. This weird form of technological regression is a techno-populist version of it. America never learns its lesson, does it?
Currently, I am on Matrix, albeit I use it more or less for bridge and puppeteer bot purposes. To me, it’s like going back to using muskets when everyone else is using AK-47s. The solution is not to adopt more regressive pieces of technology; regressive tools are least effective when you are essentially in a guerrilla, stochastic war with your own fascist government.
So, I had an experience with an influencer account on the fediverse: @FediTips
People believe they are being informed when they are being influenced. That account is literally publishing what are, by any technical definition, manifestos.
That emphasized why I don’t argue online. There is an interesting property about facts: factual ontological propositions about something will converge. That is a fancy way of saying that if something is factual, it would be corroborated. Accuracy is not the only important thing; precision is, too. The more corroborative aspects that converge on that claim, the more precise it is. That is why in science, replication and a lot of measurements are so important.
When we talk about online conversations, this is important because if I say something is a fact, and there is a source I got it from, I should cite where I got it. If I or other sources cite that source, or a different source saying the same thing, it at least makes the statement more precise, albeit not necessarily more accurate. Accuracy means a measurement closely approximates the true value, whereas precision means repeated measurements are consistent with one another. If you hit the bullseye once but never again, that is accurate but imprecise. If you never hit the bullseye but always hit the same spot, that is precise. You can be inaccurate yet precise.
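To make the bullseye analogy concrete, here is a minimal Python sketch (the instruments and numbers are invented for illustration): accuracy is how far the average measurement sits from the true value, and precision is the spread of repeated measurements.

```python
from statistics import mean, pstdev

true_value = 10.0

# A biased but consistent instrument: precise, not accurate.
precise_inaccurate = [12.0, 12.1, 11.9, 12.0, 12.05]
# An unbiased but noisy instrument: accurate on average, imprecise.
accurate_imprecise = [7.0, 13.5, 9.0, 11.0, 9.5]

def accuracy_error(xs):
    """Distance of the average measurement from the truth (lower = more accurate)."""
    return abs(mean(xs) - true_value)

def imprecision(xs):
    """Spread of repeated measurements (lower = more precise)."""
    return pstdev(xs)
```

The first instrument always lands near 12 (tight spread, wrong spot); the second averages out to exactly 10 (right spot, wide spread). Each is good on one axis and bad on the other.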
That means if someone makes a claim and a source exists, you can check that claim by visiting the source. If multiple sources make the same claim, it is a more precise claim. The issue I have with arguments online, for example the endless arguments on Mastodon about Bluesky, is that the sources backing whatever facts I would present are readily accessible, yet nobody bothers to check them. So, if someone makes a statement about the AT protocol and you say that is wrong, you can look at the documentation, point this out, and say it is wrong because of x, y, and z.
That brings me to the fuckery of today:
@FediTips said to me today:
No, you cannot run your own server on AT. Bluesky have made it virtually impossible to set up independent infrastructure. You can store data, but the connections to others run through Bluesky corporation’s infrastructure which they control.
I shit on Bluesky all day, every day, so I’m not a Bluesky stan. But all you have to do is think about why this doesn’t make sense. They are essentially saying that you cannot have a server running a Bluesky PDS that isn’t owned by Bluesky. First, that is not how servers work. A protocol is a way for devices to talk to one another and network.
A network protocol is a formal specification that defines how systems interoperate. It establishes message schemas, authentication mechanisms, transport methods, state transitions, and other rules governing communication between nodes. If multiple servers implement the same protocol specification correctly, they can exchange data and participate in the same network. At the protocol layer, interoperability is determined by adherence to the specification, not by who owns or operates a given server.
Protocol compliance does not inherently guarantee open or permissionless participation. In practice, operators can impose constraints through licensing terms, cryptographic trust roots, certificate authorities, service discovery mechanisms, federation allowlists, or other gatekeeping controls. Protocols enable independent servers to communicate; however, it does not logically follow that any compliant server must be accepted into the broader network without additional policy or governance constraints. That is the defensible version of the point they are making.
However, they are conflating who owns a server with what is allowed into the network. And the argument they make against the AT protocol applies equally to the ActivityPub protocol. They specifically said, “No, you cannot run your own server on AT. Bluesky have made it virtually impossible to set up independent infrastructure.”
No, you can absolutely run your own independent server that communicates over the AT protocol. Either they understand this and are arguing in bad faith, which I think is very likely, or they are completely disinterested in facts. They merely want to spread and enforce a cultural and political norm. Maybe it’s both.
For example, I am using the ActivityPub protocol to post this, and it is sent to your folks’ inboxes via ActivityStreams through the WordPress ActivityPub plugin. Anyone can set up their own servers on their own hosts. I know for a fact that some feeds are being run off Raspberry Pis in people’s closets. In fact, some people have both fediverse instances and AT protocol PDSs running off the same Raspberry Pis in their closets. By this person’s reasoning, that would imply that Bluesky owns their ISPs, the closets, and the Raspberry Pis.
Yes, the practical network experience heavily depends on Bluesky-operated infrastructure. And yes, it is true that this is different from something like Mastodon on ActivityPub, where federation between independently operated servers is widely distributed and actually decentralized. I’m not contesting that the infrastructure is heavily dependent on and operated by Bluesky.
That’s not the claim @FediTips made. The claim they made is that everyone else’s computers—a cloud is just someone else’s computer, mind you—that use the AT protocol are owned by Bluesky. This statement is so absurd to me that I am not sure if this was a semantic error and not what they meant, or if it is exactly what they meant. If it is the former, it is still bad, because they were disinterested in fact-checking, which is my point. If they bothered to fact-check, they would have caught the inaccuracy or the semantic error.
Second, let’s say you know nothing at all about servers, protocols, etc.—you can just look it up.
https://atproto.com/guides/self-hosting
https://docs.digitalocean.com/products/marketplace/catalog/bluesky-social-pds
I was not aware that DigitalOcean was owned by Bluesky. That’s because they are not owned by Bluesky. DigitalOcean is to an AT protocol PDS what a Mastodon host is to Mastodon. If this person had simply done a five-minute search, they would realize that Bluesky does not own the independent servers for the Bluesky apps, although it controls the protocol architecture. Personally, I think @FediTips did it in bad faith, because multiple developers have corrected these Mastodon influencer accounts over and over again. At this point, it is propaganda.
I don’t care about this argument in particular. Rather, it’s an example of why I don’t argue with people online. They don’t check what they say or look up what the other person said because they are disinterested in facts. They are interested in the normative claim and cultural norms they are trying to spread and enforce. It’s basically a form of evangelizing and proselytizing.
Again, I don’t really care for this particular argument, which is why I never directly addressed it with them. What I am saying is that they were disinterested in easily accessible facts, so arguing with them to persuade them is a waste of my time.
People on social media care about culture first and facts second. I am not going after the people on Mastodon specifically. Redditors are infamous for this shit. If you ask me, Reddit and Discord are ground zero cases for this dumbass culture of reply guys.
A long time ago, I wrote my own Bayesian classifier and Markov algorithm that curate only what I want to see in activity streams, so I don’t see whatever fuckery many of these idiots on social media are up to. I also have my own algorithm for feeds on Bluesky.
You can see the documentation for how Activity Streams, which is what ActivityPub uses, works here:
Activity Streams 2.0
https://www.w3.org/TR/activitystreams-core
I curate the ActivityStreams my inbox receives so I see posts ranked by relevance rather than chronological order. Most of the time, their nonsense is filtered out. I just had time to kill.
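I’m not going to publish my actual curation code, but the general shape of a Bayesian relevance filter is simple. Here is a minimal naive-Bayes-style sketch in Python (the class name, training examples, and labels are hypothetical, not my production code): score each post by the log-odds of "relevant" versus "noise" given its words, with add-one smoothing, and sort by score.

```python
import math
from collections import Counter

class TinyRelevanceFilter:
    """Minimal naive-Bayes-style scorer: ranks posts by the log-odds of
    'relevant' vs 'noise' given their words, with add-one smoothing."""
    def __init__(self):
        self.counts = {"relevant": Counter(), "noise": Counter()}

    def train(self, text, label):
        self.counts[label].update(text.lower().split())

    def log_odds(self, text):
        score = 0.0
        n_rel = sum(self.counts["relevant"].values())
        n_noise = sum(self.counts["noise"].values())
        for word in text.lower().split():
            # Smoothed per-word likelihoods under each class.
            p_rel = (self.counts["relevant"][word] + 1) / (n_rel + 2)
            p_noise = (self.counts["noise"][word] + 1) / (n_noise + 2)
            score += math.log(p_rel / p_noise)
        return score

f = TinyRelevanceFilter()
f.train("new atproto pds release notes", "relevant")
f.train("outrage thread argument drama", "noise")

posts = ["pds release is out", "another outrage argument"]
# Highest log-odds first: relevance order, not chronological order.
ranked = sorted(posts, key=f.log_odds, reverse=True)
```

A real version would train on far more data and handle tokenization properly, but the ranking idea is the same: the stream is reordered by evidence of relevance instead of by timestamp.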
My PDS Doesn’t Participate in Bluesky’s Age-Verification Flow
So, apparently, with the last few updates Bluesky has shipped, they have expanded the regions that require age verification to include Ohio, and they are preparing to expand it to Australia. They have also made it so that unverified users don’t have access to DMs or to material labeled by Bluesky’s moderation services. I’ve been looking into the age verification system that Bluesky uses as I configure my own PDS. app.bsky.ageassurance.begin is an explicit API call that a client or PDS must intentionally call to start the age-verification process, hand the user off to the verification provider, and receive an updated age-assurance state.
Check here for the official Bluesky documentation:
This endpoint is part of the Bluesky application Lexicon APIs (app.bsky.*). Requests to public endpoints, which don’t require authentication, can be made directly against the public Bluesky AppView API: https://public.api.bsky.app. Authenticated requests are usually made to the user’s PDS, with automatic service proxying, and can be used for both public and non-public endpoints.
https://docs.bsky.app/docs/api/app-bsky-ageassurance-begin
If a PDS does not call app.bsky.ageassurance.begin, the age-verification flow does not start. Age verification occurs on the client side, not the server side. If a PDS does not implement the app.bsky.ageassurance.* endpoints, it cannot interact with Bluesky’s age verification flow.
Georgia—where I currently live as of writing this post—does have age restriction and verification laws; however, they are weak, so Bluesky has not had to do much in my state. As a result, I’ve just been using Bluesky’s PDS. However, I have been setting up my own PDS. Since my PDS does not call app.bsky.ageassurance.begin, does not check getState, and does not read getConfig, it has no way to initiate age verification, determine whether a user is verified, or enforce or reflect any age-based restrictions. My PDS does not participate in Bluesky’s age-assurance system at all. That’s just one part of Bluesky’s moderation structure. Bluesky’s moderation services use labelers.
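The gating logic is easy to see in miniature. Here is a hedged Python sketch (the class and method names are hypothetical analogues, not Bluesky’s actual implementation) of the point above: the verification flow simply never starts unless something explicitly calls begin.

```python
class AgeAssuranceState:
    """Toy analogue of the gating described above: the verification flow
    only ever starts if something explicitly initiates it. Hypothetical
    names; not Bluesky's actual code."""
    def __init__(self):
        # No call has been made, so no flow has started.
        self.status = "unknown"

    def begin(self):
        # Analogue of app.bsky.ageassurance.begin: hand the user off to a
        # verification provider and mark the flow as pending.
        self.status = "pending"
        return self.status

# A PDS that never calls begin() never enters the flow at all.
noncompliant_pds = AgeAssuranceState()
# (no begin() call here)

# A client that does call begin() starts the handoff.
participating_client = AgeAssuranceState()
participating_client.begin()
```

The point of the sketch: the state stays "unknown" by default. Nothing about the protocol forces a PDS to flip it.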
Labels and moderation
https://docs.bsky.app/docs/advanced-guides/moderation
Labelers are services or accounts that apply moderation labels. My PDS does not declare default app labelers, so it does not automatically trust any labelers. That gives me maximum user control with the minimum level of automated moderation.
If you do not want to go through all of this trouble and want to keep using the native bsky.app, see here:
Bluesky’s age assurance sucks, here’s how to work around it.
https://gist.github.com/mary-ext/6e27b24a83838202908808ad528b3318
I am going to be honest. I hate Bluesky’s political, philosophical, and ideological values, think its moderation is heavily flawed, and am more closely aligned politically with the Fediverse. But… the Fediverse is not fun. It’s essentially people constantly commiserating, with no interesting or entertaining content, peppered with manifestos and “this bad thing happened in the news—be enraged by it” posts. I really wanted to love the Fediverse, but there’s nothing to do over there.
Bluesky’s Solution to Moderation Is Moderating Without Moderating via Social Proximity
I have noticed a lot of people are confused about why some posts don’t show up in threads even though they are not labeled by the moderation layer. Bluesky has begun using what it calls social neighborhoods (or network proximity) as a ranking signal for replies in threads. Replies from people who are closer to you in the social graph (accounts you follow, interact with, or share mutual connections with) are prioritized and shown more prominently. Replies from accounts that are farther away in that network are down-ranked. They are pushed far down the thread or placed behind “hidden replies.”
Each person gets their own unique view of a thread based on their social graph. It creates the impression that replies from distant users simply don’t exist. This is true even though they’re still technically public and viewable if you expand the thread or adjust filters. Bluesky is explicitly using features of subgraphs to moderate without moderating. Their reasoning is that if you can’t see each other, you can’t harass each other. Ergo, there is nothing to moderate.
Bluesky mentions that here:
https://bsky.social/about/blog/10-31-2025-building-healthier-social-media-update
As a digression, I’m not going to lie: I really enjoyed working on software built on the AT protocol, but their fucking users are so goddamn weird. It’s sort of like enjoying building houses, but hating every single person who moves into them. But, you don’t have to deal with them because you’re just the contractor. That is how I feel about Bluesky. I hate the people. I really like the protocol and infrastructure.
I am sort of a sadist who enjoys drama, so I do get schadenfreude from people with social media addictions and parasocial fixations who reply to random people on Bluesky, because they don’t realize their replies are disconnected from the author’s thread unless that person is within their network. They aren’t part of the conversation they think they are. They’re algorithmically isolated from everyone else. Their replies aren’t viewable from the author’s thread because of how Bluesky handles social neighborhoods.
Bluesky’s idea of social neighborhoods is about grouping users into overlapping clusters based on real interaction patterns rather than just the follow graph. Unlike Twitter, it does not treat the network as one big public square. Instead, it models networks of “social neighborhoods” made up of people you follow, people who follow you, people you frequently interact with, and people who are closely connected to those groups. They’re soft, probabilistic groupings rather than strict labels.
Not everyone sees the same replies. Bluesky is being a bit vague with “hidden.” Hidden means your reply is still anchored to the thread and can be expanded. But there is another way Bluesky handles this: it uses social neighborhoods to judge contextual relevance. Replies from people inside or near your social neighborhood are more likely to be shown inline with a thread, expanded by default, or served in feeds. Replies from outside your neighborhood are still public and still indexed, but they’re treated as lower-context contributions.
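A minimal sketch of that kind of proximity ranking, assuming a toy undirected follow/interaction graph and BFS hop counts (this is illustrative, not Bluesky’s actual ranking code): each reply is ranked by how many hops its author sits from the viewer, and disconnected authors sink to the bottom.

```python
from collections import deque

def hops(graph, start, goal):
    """BFS distance in an undirected follow/interaction graph;
    unreachable nodes get infinity."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        if node == goal:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

# Toy interaction graph: viewer <-> alice <-> bob; carol is disconnected.
graph = {
    "viewer": {"alice"},
    "alice": {"viewer", "bob"},
    "bob": {"alice"},
    "carol": set(),
}

reply_authors = ["carol", "bob", "alice"]
# Nearer neighborhoods rank first; distant or disconnected replies sink.
ranked = sorted(reply_authors, key=lambda who: hops(graph, "viewer", who))
```

Note that nothing here deletes carol’s reply. She is still public and still indexed; she just ends up at the bottom of this particular viewer’s thread, which is the whole "moderating without moderating" move.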
Basically, if you reply to a thread, you will see it anchored to the conversation, and everyone will see it in search results, as a hashtag, or from your profile, but it will not be accessible via the thread of the person you were replying to. It is like shadow-banning people from threads unless they are strongly networked.
Because people have not been working with the AT Protocol like I have, they assume they are shadow-banned across the entire Bluesky app view. No—everyone is automatically shadow-banned from everyone else unless they are within the same social neighborhood. In other words, you are not part of the conversation you think you are joining because you are not part of their social group.
Your replies will appear in profiles, hashtag feeds, or search results without being visually anchored to the full thread. Discovery impressions are neighborhood-agnostic: they serve content because it matches a query, tag, or activity stream. Once the reply is shown, the app then decides whether it’s worth pulling in the rest of the conversation for you. If the original author and most participants fall outside your neighborhood, Bluesky often chooses not to expand that context automatically.
Bluesky really is trying to avoid having to moderate, so this is their solution. Instead of banning or issuing takedown labels to DIDs, the system lets replies exist everywhere, but not in that particular instance of the thread.
I find this ironic because a large reason why many people are staying on Bluesky and not moving to the fediverse—thank God, because I do not want them there—is discoverability, virality, and engagement.
In case anyone is asking how I know so much about how these algorithms work: I was a consultant on a lot of these types of algorithms, so I certainly hope I’d know how they work, lol. No, you get no more details about the work I’ve done. I have no hand in the algorithm Bluesky is using, but I have proposed and implemented that type of algorithm before.
I have an interest in noetics and the noosphere. A large amount of my ontological work is an extension of my attempts to model domains that have no spatial or temporal coordinates. The question is how you generalize a metric space that has no physical, spatial properties. I went to school to try to formalize those ideas. Turns out they’re rather useful for digital social networks, too. The ontological analog to spatial distance, when you have no space, is a graph of similarities.
This can be modeled by representing each item as a node in a weighted graph, where edges are weighted by dissimilarity rather than similarity. Highly similar items are connected by low-weight edges, while less similar items are connected by higher-weight edges. Distances in the graph, computed using standard shortest-path algorithms, then correspond to degrees of similarity. Closely related items are separated by short path lengths, while increasingly dissimilar items require longer paths through the graph. It turns out that attempts to generalize metric spaces for noetic domains—to model noetic/psychic spaces—are actually pretty useful for social media algorithms, lol.
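Here is a minimal Python sketch of that construction (the node names and weights are toy examples): items are nodes, edges are weighted by dissimilarity, and Dijkstra’s shortest paths recover a distance that behaves like degrees of similarity.

```python
import heapq

def dissimilarity_distance(graph, src):
    """Dijkstra over a graph whose edge weights are dissimilarities:
    a short path length means closely related, a long path means
    distantly related."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Toy non-spatial domain: low weight = very similar items.
graph = {
    "grief": {"mourning": 0.1, "nostalgia": 0.6},
    "mourning": {"grief": 0.1, "despair": 0.3},
    "nostalgia": {"grief": 0.6},
    "despair": {"mourning": 0.3},
}

d = dissimilarity_distance(graph, "grief")
```

Notice that "despair" ends up closer to "grief" than "nostalgia" does, even though they are not directly connected: the path through "mourning" is short. That is the sense in which path length stands in for spatial distance in a domain that has no space.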