On inequality and AI risks

Friday morning I read Jenny's piece on how the ultra-rich make the world worse. Friday evening I attended a rationality meetup about If Anyone Builds It, Everyone Dies. Sunday afternoon, an on-site PauseAI meetup.

Three events, one weekend. Some clarity in the end.

If you haven't read the book mentioned above, or my previous reflections on the matter, let me quote the authors directly:

If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.

We do not mean that as hyperbole.

— Eliezer Yudkowsky & Nate Soares, “If Anyone Builds It, Everyone Dies” (2025)

Today on the menu is a kind of diary-form post, not an essay.

For good measure, let's refer to somebody else's definition:

Artificial superintelligence (ASI) is a hypothetical type of AI with intellectual, self-improving and analytical abilities that surpass any human intelligence. With essentially limitless cognitive capabilities, ASI could act as a revolutionary force across all aspects of life.

— Built In, “What Is Artificial Superintelligence (ASI)?” (2025)

The immediate danger isn't ASI emerging from a data-centre next week. It's the tech oligarchs who already have too much power and who are racing toward AGI without adequate constraints. They're the ones accumulating resources. They're the ones who'll decide whether to pause or accelerate.

Jenny's article laid it out: the ultra-rich don't just have wealth, they have structural power. They shape policy, they control platforms, they set the direction for entire industries. The lunatic tech bros aren't some future threat. They're here. They're already dangerous.

Inequality isn't adjacent to the AI risk problem—it's the root cause.

This reframes my concern. What I'm afraid of now is this: what happens when someone with billions decides to dedicate an entire AI data-centre to unsupervised recursive training? Not necessarily because they're evil (though some would say that too), but because they're convinced they're the hero of humanity's story and nobody can stop them. I don't need to add anything to the dumpster fire they keep stoking themselves.

Current LLMs have training memory (their knowledge base, baked into the weights) and runtime memory (chat context, RAG, tool use). We've seen what happens when you combine these—the interactions really do get better. It's not a breakthrough, it's engineering: assembling existing tools in the right configuration.
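To make that concrete, here's a minimal sketch of what I mean by "assembling existing tools": a frozen model (training memory) wrapped with retrieval and chat history (runtime memory). The retrieve() and generate() functions below are placeholders of my own, not any specific library's API.

```python
# A toy sketch (my own placeholders, not a real library): a frozen model
# supplies the "training memory", while retrieval and chat history supply
# the "runtime memory" assembled around each question.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Naive retrieval: rank documents by word overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt: str) -> str:
    """Stand-in for a call to a frozen LLM; the weights never change here."""
    return f"[model answer conditioned on {len(prompt)} characters of context]"

def answer(question: str, documents: list[str], history: list[str]) -> str:
    """Assemble runtime memory (chat history + retrieved docs) around the question."""
    context = retrieve(question, documents)
    prompt = "\n".join(history + context + [question])
    return generate(prompt)

if __name__ == "__main__":
    docs = [
        "Training runs consume most of the compute budget.",
        "Inference is comparatively cheap.",
    ]
    print(answer("Where does the compute go?", docs, history=[]))
```

Every piece of that already exists off the shelf; the improvement comes from the wiring, not from new capabilities.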

But what if an AI had actual means to direct its own training? Not hypothetically, but with reserved compute, minimal supervision, and a poorly-specified goal like "achieve AGI"? How long before the next real breakthrough? Is the path to ASI exponential once you start that feedback loop?

I don't know. From what I read, nobody knows. That's the point from the book that got me: we don't understand how this technology works internally. We can only observe outputs. By the time we notice something's wrong, it might be too late. We might not even notice. Remember: outputs are the only thing we can observe, and we now know that not seeing does not mean not happening.

The scalability problems the AI industry is facing help—training data constraints, energy consumption, hardware limits. These buy us (some) time. But they're not permanent barriers, they're engineering problems with engineering solutions.

The PauseAI meetup was only the second one for this group, with half of us being newcomers (myself included). In the end, we tried to keep the focus practical and active: advocacy with elected officials supplemented by grass-roots awareness campaigns. Not trying to convince tech CEOs to slow down—instead talking to politicians who can ask the right questions. "What's being done about this threat? This is real. Why aren't we talking about it?" Building regulatory capacity. Making sure governments have the technical literacy to regulate effectively. It is clear that only a few people are aware of the existential risk; we can change that.

That's the actual work: international regulatory frameworks. Antitrust enforcement. Shifting the narrative away from "move fast and break things." The tech bros won't self-regulate. They've said so explicitly. So the pressure has to come from somewhere else.

(As a side note, from an engineering and craftsmanship standpoint, I've always disliked this attitude outside of experiments. And when it gets applied to people's livelihoods—and in this case, their existence—well… very-much-not-happy-face.)

One more takeaway from Friday: it's okay to use current technology. You don't have to be technophobic to be scared of ASI. The compute for training GPT-4-class models has already been spent. Using these tools for coding or writing doesn't meaningfully contribute to AGI/ASI risks. The training runs are the problem, not the inference.

There are enough problems with LLMs as they exist today—environmental costs, labour displacement, copyright issues, bias amplification—to justify refusing them entirely. I understand that stance, and I appreciate it.

Two meetups, one article, one weekend. The inequality angle ties everything together. Smart people are working on this. That helps. (Sid, that's your cue: I wanted to link to your blog post, but it's still in your head…)