All things AI with Sam Altman and Satya Nadella
Last week's BG2 Pod had Brad Gerstner of Altimeter Capital interviewing Sam Altman and Satya Nadella. Well worth hearing two of the most powerful people on the planet share views on the respective futures of their organisations, and how they see the OpenAI-Microsoft partnership. There's a lot in here about how things could develop economically compared to today's internet: the importance of "fungibility" of workloads for a hyperscale cloud provider like Microsoft, and the fact that historically Microsoft has had quite small per-user revenues for its office suite despite constant everyday usage, whereas now "look at the M365 Copilot price, I mean it's higher than any other thing that we sell and yet it's getting deployed faster and with more usage" (with similar thoughts on GitHub Copilot). There's a somewhat chilling moment 55 minutes in when Satya explains how he sees all the documents, chats and code being created (what we as users think of as our content) as feeding the Microsoft graph that will be used for grounding (ensuring AI model outputs are relevant and accurate relative to real-world situations):
I mean think about it. The more code that gets generated, whether it is Codex or Claude or wherever, where is it going? GitHub. More PowerPoints that get created, Excel models that get created, all these artifacts and chat conversations. Chat conversations are new docs; they're all going into the graph, and all that is needed again for grounding.
You can also find this via your favourite podcast player, e.g. on Spotify.
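On that grounding point: the pattern Satya is describing is commonly implemented as retrieval, where stored artifacts are ranked for relevance to a query and the best matches are injected into the model's prompt as context. Here is a minimal, purely hypothetical sketch of that pattern; the `Artifact` type, the toy `embed` function and the prompt format are all my assumptions, not anything from the podcast or Microsoft's actual pipeline:

```python
# Illustrative sketch only: retrieval-based grounding over a store of
# user artifacts (docs, chats, code). All names here are hypothetical,
# not Microsoft's actual graph or APIs.
from dataclasses import dataclass


@dataclass
class Artifact:
    source: str  # e.g. "github", "chat", "excel"
    text: str


def embed(text: str) -> list[float]:
    # Toy stand-in: real systems use a learned embedding model.
    return [float(ord(c)) for c in text.lower()[:16]]


def similarity(a: list[float], b: list[float]) -> float:
    # Negative squared distance: higher means more similar.
    return -sum((x - y) ** 2 for x, y in zip(a, b))


def ground(query: str, graph: list[Artifact], k: int = 2) -> str:
    # Rank stored artifacts against the query and prepend the top-k
    # as context before the question is sent to the model.
    ranked = sorted(
        graph,
        key=lambda a: similarity(embed(a.text), embed(query)),
        reverse=True,
    )
    context = "\n".join(f"[{a.source}] {a.text}" for a in ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {query}"


graph = [
    Artifact("github", "def add(a, b): return a + b"),
    Artifact("chat", "We agreed to ship the Q3 report on Friday."),
    Artifact("excel", "Revenue model assumes 5% monthly growth."),
]
print(ground("When is the Q3 report due?", graph, k=1))
```

Running this prints a prompt with the most relevant artifact prepended as context; a real system would use learned embeddings and a vector index rather than this toy character-code similarity.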
There was another Sam Altman podcast interview released last week, this time with Tyler Cowen: Sam Altman on Trust, Persuasion, and the Future of Intelligence - Live at the Progress Conference. There's a good commentary from Zvi Mowshowitz (who writes frequently on AI safety issues): On Sam Altman's Second Conversation with Tyler Cowen.
H1: Many users of AI companions naturally end conversations with an explicit farewell message, rather than silently logging off.

H2: Commercial AI companion apps frequently respond to farewell messages with emotionally manipulative content aimed at prolonging engagement.

H3: These emotionally manipulative messages increase post-farewell engagement (e.g., time on app, message count, word count).
The tactics worked: In all six categories, users stayed on the platform longer and exchanged more messages than in the control conditions, where no manipulative tactics were present. Of the six companies studied, five employed the manipulative tactics. But the manipulation tactics came with downsides. Participants reported anger, guilt, or feeling creeped out by some of the bots’ more aggressive responses to their farewells.
The targets are concrete and measurable. By 2030:

- 75% of private-sector companies using AI
- 90% of the population using AI daily
- 50,000 qualified AI experts across the country
- 4 million citizens earning AI-related certificates
- 100% of government services enhanced by AI agents
- 200 million GPU hours available annually to Ukrainian researchers
- at least 500 Ukrainian AI companies competing globally
How do we know which skills will continue to be useful? I would suggest that the skills of discernment are those which always continue to have value. They would have had value in the Roman Empire and they have value today. They are the skills of sorting the wheat from the chaff.