25 Aug 2025: Seemingly conscious AI & emotional agents; AI as 4 kinds of cultural technology; Separating work and personal AI memory

Emotional Agents

We must build AI for people; not to be a person (Seemingly Conscious AI is Coming)

Starting with a pair of related articles this week from Kevin Kelly (co-founder of Wired, among many other things) and Mustafa Suleyman (co-founder of DeepMind and now leading AI at Microsoft). They make similar arguments: it doesn't matter if an AI can really feel emotions, we'll have emotional relationships with AI anyway; and it doesn't matter if an AI is really conscious, the fact that it seems to be will trigger much the same societal impacts. I explore related themes in Could Annie Bot be powered by ChatGPT?, which asks whether present-day AI could fake being the robot character Annie in this year's award-winning science fiction novel Annie Bot. Both articles share concerns about where this takes human society, and the extent to which we can course-correct.

AIs do real things we used to call intelligence, and they will start doing real things we used to call emotions. Most importantly, the relationships humans will have with AIs, bots, robots, will be as real and as meaningful as any other human connection. They will be real relationships.
My central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention.
Large language models are cultural technologies. What might that mean?

The latest post from Henry Farrell, continuing the theme started with Large AI models are cultural and social technologies. It's a long, dense, thought-provoking article, but worth the time. It contrasts four ways of understanding LLMs:
  1. Gopnikism (after Alison Gopnik) is a stance that Farrell has contributed to, viewing LLMs as cultural and social technologies. "Just as written language, libraries and the like have shaped culture in the past, so too LLMs, their cousins and descendants are shaping culture now."
  2. Interactionism. In this view, the interaction between human and AI behaviours is what will give rise to new phenomena. "What is the cultural environment going to look like as LLMs and related technologies become increasingly important producers of culture? How are human beings, with their various cognitive quirks and oddities, likely to interpret and respond to these outputs? And what kinds of feedback loops are we likely to see between the first and the second?"
  3. Structuralism. This philosophical camp regards language as a system separate from its connection to reality or to the people who use it; within that system an LLM is suddenly a new kind of language-generating technology, creating a new kind of artificial cultural artifact.
  4. Role play. This references Murray Shanahan's perceptive take that LLMs are best understood as role playing different characters (Role play with large language models), a framing I've personally found illuminating.

There's no answer yet; this is the start of a longer thought process, and all four lenses may turn out to be useful.

BYOM (Bring Your Own Memory)

I agree with this prediction. We will build and retain context and memory for AI systems over time, and we will need ways to compartmentalise personal and work use. The analogy is with BYOD ("bring your own device"), where you can use work applications on a personal device, subject to appropriate security controls.
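To make the idea concrete, here is a minimal sketch (all names hypothetical, not any particular product's API) of a memory store that keeps work and personal context in separate profiles, so a work session never retrieves personal memories and vice versa:

```python
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    # Each profile ("work", "personal", ...) keeps its own list of remembered facts.
    profiles: dict[str, list[str]] = field(default_factory=dict)

    def remember(self, profile: str, fact: str) -> None:
        # Store a fact under exactly one profile.
        self.profiles.setdefault(profile, []).append(fact)

    def recall(self, profile: str) -> list[str]:
        # Retrieve only the facts belonging to the requested profile.
        return list(self.profiles.get(profile, []))


memory = MemoryStore()
memory.remember("personal", "Prefers informal replies")
memory.remember("work", "Project deadline is Friday")

# A personal-mode session sees no work context, and vice versa.
assert "Project deadline is Friday" not in memory.recall("personal")
assert "Prefers informal replies" not in memory.recall("work")
```

In practice the "security controls" of the BYOD analogy would sit around the recall step, deciding which profile a given session is allowed to read.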

Nano Banana! Image editing in Gemini just got a major upgrade

This week's best new launch: much better image editing in Google Gemini. Sometimes gradual small improvements add up to a product feature that is a game changer, and this feels like one. It just works, often enough.

Interesting that Google tested it under the name "nano banana" in public head-to-head tests before revealing it was their model.