How I successfully used an LLM, or my Simon Willison moment

Simon Willison has legendary throughput. No, seriously. Not only is he a co-creator of Django, he has been publishing interesting software (see Datasette and its gazillion plugins) at an outstanding pace. More recently he has taken on the task of making LLMs easy to interact with from the CLI, with his llm program. In today's post, I write about my first real success with an LLM. I shall update my subtitle to "He who prompted okay once".

I am very cautious about LLMs in general. It's not magic. It's simply getting the most likely next token over and over again with a bit of randomness sprinkled on top. (Feel free to shout at me on Mastodon if this is terribly wrong. Just be sure you're more right than I am; it's annoying to get yelled at for the wrong reason.) I'd like computers to help me more with some tasks I'd rather not do, so I gave it a shot today.

"What did you do?" I hear you ask. I instructed an LLM to draft paragraphs for an upcoming conference presentation.


Let's not get crazy over here, give me the benefit of the doubt, will you? I had to write an actual abstract and an outline for that talk. No lying, it had to be genuine, because who on Earth would want to read (let alone attend) a proposal written by a bot? I'd certainly feel all sorts of negative emotions. But here, I don't feel bad feeding my own writing to a machine and seeing what comes out of it.

My grand plan is to write the talk ("Event Sourcing in production") in full, make some blog posts out of that, and then distil it back into a slideshow for the conference. And of course, I am experiencing writer's block.

So I fired up a chat session with a local LLM (Meta Llama 3 8B) and asked it to help me overcome blank page syndrome. I had low expectations in terms of length and quality, and no fear of losing my soul, given that I would rewrite the whole thing at least twice. Here is what I instructed the LLM with:

!multi
Draft a few paragraphs for the following talk about "Event Sourcing in production". The outline is:
1. The concepts (5 minutes)
   ⇒ Brief explanation and definition of event sourcing as well as the relevant parts of DDD and CQRS
2. Evolution (10 minutes)
   ⇒ Strategies to deal with changes in the domain models
3. Projections (_aka_ read models in CQRS) (10 minutes)
   ⇒ How to efficiently query your domain, also to support new requirements
4. Runtime (10 minutes)
   ⇒ Synchronous vs asynchronous requests, horizontal/vertical scaling
5. Tying up and sharing what we learnt (5 minutes)
   ⇒ References to companion examples and documentation to share our experience in patterns (this talk from another angle)
!end
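For context, a session like this can be set up with Simon Willison's llm tool. A sketch, assuming the llm-gpt4all plugin for running local models (the exact model identifier will depend on what your install reports):

```shell
# Install the CLI and a plugin that can run local models (llm-gpt4all is one option)
pip install llm
llm install llm-gpt4all

# Check which model identifiers are available
llm models

# Open an interactive chat with a local model;
# inside the chat, "!multi" starts a multi-line prompt and "!end" sends it
llm chat -m Meta-Llama-3-8B-Instruct
```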

There were a few hiccups along the way. For instance, I could only get so many characters before the LLM would stop answering, forcing me to instruct it to "continue" until it gave the full answer. When the time came to expand on the specific Python library (eventsourcing) in section 4, the LLM spat out platitudes about sync vs. async and scaling. Pretty much what I wrote in the outline, but useless to me. "Ah, right. I did not actually mention the library in the prompt. PEBKAC."

I tried again with the fourth section, this time specifying to focus on "the Python library eventsourcing". I did get more fluff, but with actually relevant bits. Unfortunately the library changed quite a lot in 2021 and that is not reflected in the LLM's training data. The text is now completely inaccurate.

I could go on with more prompts and refinements. Maybe try to retrieve fresher content from the library's current documentation. Yet it does not matter here. I am not looking for a very precise and accurate answer, only for some breadcrumbs to avoid staring at a blank screen. And that is a success.


For good measure I ran the same script with ChatGPT 3.5 and got the same useless fluff for the runtime section. Platitudes in ⇒ platitudes out. Not sure why I even bothered; my prompt is the problem in this case.

As a side note, all examples provided by the LLM were about order systems. That matches my own exposure to online "literature" about event sourcing: lots of online store examples. Here and now, I vow to use a different example myself.

My takeaway is that, yes indeed, LLMs are useful for summarising well-known and widespread concepts and ideas. And yes, you still need to know something about the topic you ask about to spot the plainly wrong answers.


On the Fediverse…

I advertised this post on Mastodon