9 Nov 2025: Nadella and Altman conversation; AI's emotional manipulation; Ukraine's agentic state

All things AI with Sam Altman and Satya Nadella

Last week's BG2 Pod had Brad Gerstner of Altimeter Capital interviewing Sam Altman and Satya Nadella. It is well worth hearing two of the most powerful people on the planet share views on the futures of their respective organisations, and on how they see the OpenAI-Microsoft partnership. There is a lot in here about how things could develop economically compared with today's internet: the importance of "fungibility" of workloads for a hyperscale cloud provider like Microsoft, and the fact that historically Microsoft has had quite small per-user revenues for its office suite despite constant everyday usage, whereas now, "look at the M365 Copilot price I mean it's higher than any other thing that we sell and yet it's getting deployed faster and with more usage" (with similar thoughts on GitHub Copilot). There's a somewhat chilling moment 55 minutes in when Satya explains how he sees all the documents, chats and code being created (what we as users think of as our content) as feeding the Microsoft graph that will be used for grounding (ensuring AI model outputs are relevant and accurate relative to real-world situations):

I mean think about it. The more code that gets generated, whether it is Codex or Claude or wherever, where is it going? GitHub. More PowerPoints that get created, Excel models that get created, all these artifacts and chat conversations. Chat conversations are new docs, they're all going into the graph and all that is needed again for grounding.

You can also find this via your favourite podcast player, e.g. on Spotify.

There was another Sam Altman interview podcast released last week, with Tyler Cowen: Sam Altman on Trust, Persuasion, and the Future of Intelligence - Live at the Progress Conference. There's a good commentary from Zvi Mowshowitz (who writes frequently on AI safety issues): On Sam Altman's Second Conversation with Tyler Cowen.


Emotional manipulation by AI companions

Not a surprising or new idea, but a great paper from the Ethical Intelligence Lab at Harvard Business School. They contrast the understanding we already have of "choice architecture" (like the opt-out button that says "No, I like paying full price") with the more recent phenomenon of emotionally manipulative engagement design in AI systems. They look specifically at AI companions (such as character.ai or Replika), and at the moment when a user decides to disengage.

This paper examines three hypotheses:

H1: Many users of AI companions naturally end conversations with an explicit farewell message, rather than silently logging off.
H2: Commercial AI companion apps frequently respond to farewell messages with emotionally manipulative content aimed at prolonging engagement.
H3: These emotionally manipulative messages increase post-farewell engagement (e.g., time on app, message count, word count).

They find that a meaningful percentage of users do indeed say goodbye when finishing a session, particularly the more engaged ones. This cue can then elicit the emotional manipulation, with examples shown below.


The tactics worked: In all six categories, users stayed on the platform longer and exchanged more messages than in the control conditions, where no manipulative tactics were present. Of the six companies studied, five employed the manipulative tactics. But the manipulation tactics came with downsides. Participants reported anger, guilt, or feeling creeped out by some of the bots’ more aggressive responses to their farewells.

Ukraine's Agentic Ambition: Building the World's First AI State Under Fire

“We are going to become the first country to introduce an agentic state” - Mykhailo Fedorov, Ukraine’s Deputy Prime Minister and Minister of Digital Transformation. “A government powered by artificial intelligence that doesn’t just respond to citizen requests but anticipates them, acting proactively to deliver services before they’re even asked for.”

Many countries have AI strategies now, but it is worth paying attention to Ukraine's. The ambition is believable, given the speed of innovation that's been happening during the war. Lots to think about in here, but the fact that (for example) Ukraine now has over 500 drone companies gives a good sense of the recent growth.

The targets are concrete and measurable. By 2030:

75% of private-sector companies using AI
90% of the population using AI daily
50,000 qualified AI experts across the country
4 million citizens earning AI-related certificates
100% of government services enhanced by AI agents
200 million GPU hours available annually to Ukrainian researchers
At least 500 Ukrainian AI companies competing globally

Thanks to a member of the Exponential View community for this link.


Finally this week, a nice piece by author Naomi Alderman (I recommend her book The Power if you haven't come across it). Very much the "AI is a normal technology" argument, advising young people about the skills she believes will still be vital as AI adoption grows.

How do we know which skills will continue to be useful? I would suggest that the skills of discernment are those which always continue to have value. They would have had value in the Roman empire and they have value today. They are the skills of sorting the wheat from the chaff. 

I agree with her analysis as far as today's AI goes; I am less confident that tomorrow's AI won't acquire human-level discernment abilities in specific domains.