Two Weeks with the Extinction Tool

I read a book that said AI might kill everyone. I'm now using it as a daily tool and enjoying it, and I don't know what that makes me.

Two weeks ago I read "If Anyone Builds It, Everyone Dies". I could follow the reasoning, and there's no surprise in the book: the title is the thesis. The arguments appear serious enough that, even if the authors were inflating the risk, we would still be looking at a sheer cliff humanity is bound to fall off. So, what if they are right?

It's yet another problem developing before our very eyes, and if you already feel overwhelmed by climate change, the rise of fascism, the genocides, or even the (not so) "mundane" problems with Big Tech, then by all means do not read the book. As I said, the title is explicit enough. Going further and feeling like it's going to happen "tomorrow" might lead you down a spiralling path to depression. Becoming a fatalist would not help anybody, you least of all.

I'm writing this with Claude open in another tab. I asked it to help me structure my thoughts. The conversation sharpened my points and gave me an idea for the structure (which I didn't really follow, because I am still writing this off the cuff, not like an actual essay). Classic meta approach: using the thing to write about using the thing. You either laugh with a twisted smile or cry yourself bone-dry.

So here I am, two weeks after reading the book. Two weeks in which I have been heavily using AI tools. For real tasks (at work) and for quick one-off itches (at home) that I got to scratch on the spur of the moment (or after a furious evening in a half-dazed state). I am having fun. The feedback loop is insanely fast; getting the tools to ask the right questions, to make you go deeper, to make you clarify, is addictive. (I am aware it's not rocket science; there are tons of books on how to do this, and that is exactly why LLMs can do it.) I finally get that part of the hateful hype, the "multiplier" aspect. And I am living it.

What am I doing here? What kind of person reads "this might end humanity" and then spends the next fortnight playing with that very BFG of a toy? I don't know. That is why I am writing this; I don't feel comfortable right now. Sure, I am "washing my dirty laundry" under public scrutiny, but that's the least shameful thing I am doing here.


For a while I wanted to write about my ambivalent views on modern AI (I mean LLMs). There are too many reports about the very problematic ethics of the training processes (aka modern slavery), the ethics of aggregating humanity's knowledge (I mean: mostly stealing it) and selling it under claims of fair use (yeah, right), or just the ever more aggravating signs of a bubble.

To be honest, I have lost momentum. To be even more honest, I don't have original thoughts to contribute to the big picture, so I'll stick to linking a few articles. There's usually more where those came from, so browse around. (I am told this paragraph is a cop-out. Yeah, it is.)

Also, I cannot ignore my actual use of said technology anymore. I am not embodying an ethical standpoint (if I ever was). I am being hypocritical and there's no hiding it. If you strip out the ethics, it's just using available tools to do your work. But you can't strip out the ethics, can you? (My current side project is about, eventually, fighting Big Tech. The road is a long one.) Sometimes it feels like fighting wildfires with gasoline. Too often, it feels like turning your back on the idealised version of yourself.


Today's piece was really personal. Contemplating the abyss with a loathing, disgusted envy. I don't know what I'm doing. I'm going to keep doing it anyway.

I'd appreciate links to similar pieces from fellow thinking meatbags, especially if they took the form of a good-old-times webring. I don't need validation, but I'd like to leverage what humanity has to offer: empathy.