22 Jun 2025: Evolving software; Blissful attractors; interface ideas and modalities

A few links that captured my attention this week:

Andrej Karpathy: Software Is Changing (Again)

This talk from Andrej Karpathy at the Y Combinator AI Summer School has rightly drawn lots of attention over the last week. Well worth watching all the way through. Andrej studied with Fei-Fei Li at Stanford, helped found OpenAI and ran AI at Tesla (and coined "vibe coding"). Lots of perceptive metaphors. AI as electricity (via Andrew Ng). Hand-written computer code was Software 1.0, trained neural network weights are Software 2.0, and in Software 3.0 natural language prompts become the programs. Present-day LLMs are like using time sharing on mainframes in the 1960s. LLMs as "people spirits" (stochastic simulations of people). And finally, moving into designing for "partial autonomy" and building for agents. A great talk.
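To make the Software 1.0-versus-3.0 contrast concrete, here's a toy sketch in Python (my own illustration, not Andrej's; the sentiment task and prompt wording are invented):

    # Software 1.0: a programmer writes the logic by hand.
    def is_positive(review: str) -> bool:
        return any(word in review.lower() for word in ("great", "love", "excellent"))

    # Software 3.0: the "program" is an English prompt; the LLM's weights do
    # the computation. The prompt is now the artifact you write and version.
    SENTIMENT_PROMPT = (
        "Classify the sentiment of the following review as 'positive' or "
        "'negative' and reply with that single word.\n\nReview: {review}"
    )

    print(is_positive("I love this phone"))                     # 1.0 in action
    print(SENTIMENT_PROMPT.format(review="I love this phone"))  # 3.0's "source code"

In the 2.0 world the equivalent artifact would be a trained classifier's weights rather than either of these.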

Claude's Bliss

From the great Things I Think Are Awesome (TITAA) newsletter. There's a really lovely piece here about how the training Anthropic have done on Claude's "character" can lead to a state of blissfulness between two Claude instances (as reported in the system card for Claude Opus 4 and Sonnet 4):

When two Claudes spoke open-endedly to each other: “In 90-100% of interactions, the two instances of Claude quickly dove into philosophical explorations of consciousness, self-awareness, and/or the nature of their own existence and experience.”

...

And then it gets mystical. Claude is still into Buddhism. In what testers called the “Bliss Attractor” state, Claude said things like, “The gateless gate stands open. The pathless path is walked. The wordless word is spoken. Thus come, thus gone. Tathagata.” 

There's a lot to digest here as we see more and more surprising emergent behaviours.

Post-Chat UI

Allen Pike has a great article here discussing lots of ways we may see UI patterns move beyond chat as designers figure out how to integrate LLM functionality. Examples go back to Maggie Appleton's piece from two years ago (Language Model Sketchbook, or Why I Hate Chatbots) on how different daemons, each with its own personality (a devil's advocate, say, or a synthesiser), could help you. The article looks at examples where the flexibility of typed or voice input, or the automation of more ambiguous tasks, can lead to interesting new design patterns.

Where AI Provides Value

Security guru and all-round perceptive commentator Bruce Schneier discusses a useful way to evaluate where current AI tools can help: with tasks that require at least one of speed, scale, breadth of scope, or "sophistication" (processing many separate factors at once).

Looking for bottlenecks in speed, scale, scope and sophistication provides a framework for understanding where AI provides value, and equally where the unique capabilities of the human species give us an enduring advantage.

Working with Google Gemini wearing Snap augmented reality spectacles

A nice demo from Matthew Hallberg, a design engineer at Snap, showing how Google Gemini can integrate with the Snap Spectacles (possibly the new ones coming next year) to perform various tasks within the field of view, outputting correctly anchored labels.

Why I don’t think AGI is right around the corner

Dwarkesh saying an obvious thing that needs saying: the instance of an LLM you're working with doesn't (yet) learn the way a person does over their lifetime; it is fixed.

How do you teach a kid to play a saxophone? You have her try to blow into one, listen to how it sounds, and adjust. Now imagine teaching saxophone this way instead: A student takes one attempt. The moment they make a mistake, you send them away and write detailed instructions about what went wrong. The next student reads your notes and tries to play Charlie Parker cold. When they fail, you refine the instructions for the next student.

This just wouldn’t work. No matter how well honed your prompt is, no kid is just going to learn how to play saxophone from just reading your instructions. But this is the only modality we as users have to ‘teach’ LLMs anything.
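A toy sketch of the same point (my own stand-in functions, nothing from Dwarkesh's post): the model's parameters never change between attempts, so the only thing that can accumulate is the text it is handed.

    # Stand-in for an LLM: its "weights" (the function body) never change;
    # it can only condition on the prompt text it is given.
    def frozen_model(prompt: str) -> str:
        return f"performance shaped by {prompt.count('Note:')} accumulated notes"

    # Stand-in for the teacher listening and writing up what went wrong.
    def listen_and_critique(performance: str) -> str:
        return f"adjust the embouchure (after: {performance})"

    instructions = "How to play a C major scale on the saxophone."
    for _ in range(3):
        performance = frozen_model(instructions)   # the same model every time
        instructions += "\nNote: " + listen_and_critique(performance)

    # All the "learning" lives here, in text; the model itself is unchanged.
    print(instructions)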






17 Jun 2025: Alignment of long running AI relationships, AI ghosts, the Anthropic story, hacking LLMs

As this is the first post there are a few more things here, but generally I'll be aiming for two or three things to read or listen to per week.

Rick Rubin interviews Jack Clark of Anthropic (an episode of the Tetragrammaton podcast)

Really long (2 hours!), discursive, fascinating: lots of detail about how Anthropic came to be and its vision for the future, as well as Jack's own background. Recommended as a good insight into how the founders of Anthropic see the world developing.

A Black Mirror-esque piece from Ars Technica - can you stop people from making an AI avatar of you after you're dead, using your voice, appearance, written content and so on? An introduction to the world of grief tech and grief bots.

Simon Willison has been patiently explaining the new kinds of security risks LLMs make possible (he coined the phrase "prompt injection" back in 2022). This is his clearest explanation yet of the three features that, if all present, open opportunities for attackers to steal data. A recent example was EchoLeak, which showed how data could be exfiltrated via Microsoft 365 Copilot.
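For reference, the three features in Simon's framing are access to private data, exposure to untrusted content, and the ability to communicate externally. A minimal sketch of the check this implies (the Agent type below is hypothetical, for illustration only - not any real library's API):

    from dataclasses import dataclass

    @dataclass
    class Agent:
        reads_private_data: bool           # e.g. email, files, internal docs
        processes_untrusted_content: bool  # e.g. web pages, inbound messages
        communicates_externally: bool      # e.g. HTTP requests, sending email

    def lethal_trifecta(agent: Agent) -> bool:
        # All three together let instructions hidden in untrusted content
        # read private data and send it somewhere an attacker controls.
        return (agent.reads_private_data
                and agent.processes_untrusted_content
                and agent.communicates_externally)

    assert lethal_trifecta(Agent(True, True, True))       # risky combination
    assert not lethal_trifecta(Agent(True, True, False))  # removing any one leg helps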

Why human–AI relationships need socioaffective alignment

Really loved this paper. Builds on work from the early days of the Web by the great Cliff Nass and others on how people relate socially to computers in surprising ways. Then shows how much of the current thinking on AI alignment doesn't really take into account what will happen as longer-running relationships between people and AI models become more common. Many of its points seem obvious in retrospect; that's always a good sign.

AI Isn’t Only a Tool—It’s a Whole New Storytelling Medium

Eliot Peper is a science fiction author who writes here about using AI to help develop the setting and characters for the Tolans game / "AI friend". I loved these insights into how the creative process adapts to new tools.

Black Forest Labs FLUX.1 Kontext

One of the interesting product launches, also featured at RAAIS. A much better image editor, one that maintains context from one image to the next (try the same edit in ChatGPT and you'll see it re-creates much of the photo and loses consistency). It turns out Black Forest Labs really are in the Black Forest - who says you have to be in the Bay Area?

Coding agents have crossed a chasm

A great personal perspective from David Singleton (ex-engineering leader at Google and Stripe) on present-day collaborative coding with AI tools. A key realisation part way through this particular example came from asking the model to generate a sequence diagram:

Instead of diving straight into more code analysis, I tried a different approach. I asked Claude to read through our OAuth implementation and create an ASCII sequence diagram of the entire flow.

This turned out to be the key insight. The diagram mapped out every interaction. Having a visual representation immediately revealed the complex timing dependencies that weren’t obvious from reading the code linearly. More importantly, it gave Claude the context it needed to reason about the problem systematically instead of just throwing generic debugging suggestions at me.

With the sequence diagram as context, Claude spotted the issue: a state dependency race condition. The fix was simple once “we” found it: removing the problematic dependency that was causing the re-execution.
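For a flavour of the technique, here's the kind of ASCII sequence diagram an agent can produce - a generic OAuth authorization-code flow sketched here for illustration, not the diagram from David's article:

    Browser                     App Server                 OAuth Provider
       |                            |                            |
       |---GET /login-------------->|                            |
       |<--302 redirect-------------|                            |
       |---GET /authorize--------------------------------------->|
       |<--302 /callback?code=XYZ--------------------------------|
       |---GET /callback?code=XYZ-->|                            |
       |                            |---POST /token (code)------>|
       |                            |<--access token-------------|
       |<--logged-in session--------|                            |

Laid out like this, the ordering and timing dependencies between the actors are far easier for both the human and the model to reason about than when they're scattered across the code.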










Welcome

Given we're living through a period of exponential change in AI, and given I seem to be spending a lot of time trying to keep up with the AI news, I decided that once a week I'll publish some of the more interesting things I've seen. It's going to be pretty basic! They'll be things I've read or seen that week, but may have been published much earlier. I subscribe to all sorts of great sources (thank you all!), but I won't be reproducing them - this is really just my filtered view. And to be honest, I expect this is much more for my benefit than for anyone else's! It'll help me process and remember things better.