16 Nov 2025: Will AI tutoring help; Speaking to ghosts; AI-powered nimbyism; Gemini in Google Maps

The Algorithmic Turn: The Emerging Evidence On AI Tutoring That's Hard to Ignore

Carl Hendrick is a professor in Amsterdam and an expert in how we learn and teach. This is a well-balanced article from someone with a long history in the field. He looks at the current contradictory situation: a GPT-4 tutor has been shown to outperform in-class learning delivered by highly rated instructors in a rigorous but small-scale study (there are many caveats, so do read the article), yet we know that AI systems are trained to solve our problems and answer our questions, which is very different from a good teacher's behaviour. His thesis is that we could see a significant improvement in student learning: we have 100 years of learning-science research showing the way, AI systems are infinitely patient, and, more importantly, they will improve exponentially in a way that can be replicated globally and at speed. I found this an insightful piece.

What has become clear is that LLMs designed for education must work against their default training. They must be deliberately constrained to not answer questions they could easily answer, to not solve problems they could readily solve. They must detect when providing information would short-circuit learning and withhold it, even when that makes the interaction less smooth, less satisfying for the user. This runs counter to everything these models are optimised for. It requires, in effect, training the AI to be strategically unhelpful in service of a higher goal the model cannot directly perceive: the user’s long-term learning.

The implications are sobering. If many current uses of AI in education are harmful, and if designing systems that enhance learning requires sophisticated understanding of both pedagogy and AI behaviour, then the default trajectory is not towards better learning outcomes but worse ones. Students already have unrestricted access to tools that will complete their assignments, write their essays, solve their problem sets. They are using these tools now, at scale, and in most cases their teachers lack both the knowledge to distinguish harmful from helpful uses and the practical means to prevent the former. The question is not whether AI will transform education. (It clearly already is). The question is whether that transformation will make us smarter or render us dependent on machines to do our thinking for us.

And from his concluding section:

Perhaps the answer is that teaching and learning are not the same thing, and we’ve spent too long pretending they are. Learning, the actual cognitive processes by which understanding is built, may indeed follow lawful patterns that can be modelled, optimised, and delivered algorithmically. The science of learning suggests this is largely true: spacing effects, retrieval practice, cognitive load principles, worked examples; these are mechanisms, and mechanisms can be mechanised. But teaching, in its fullest sense, is about more than optimising cognitive mechanisms. It is about what we value, who we hope our students become, what kind of intellectual culture we create.

What if the loved ones we've lost could be part of our future?

2Wai founder and Canadian actor Calum Worthy posted this video a few days ago, causing quite a stir. He's pitching the company's AI avatar-creation app as a way to preserve a memory and representation of a loved one after they've died. Like others, you'll likely be reminded of the Black Mirror episode Be Right Back (2013, series 2 episode 1 - watch the trailer). But of course the idea of talking to people who've died didn't start with Charlie Brooker: you can find similar themes all the way back to Odysseus consulting the spirits of the dead in Homer's Odyssey, right up to digitally recorded minds in William Gibson's 1984 novel Neuromancer.

Worthy’s post containing the ad garnered just 6,000 likes, but plenty of critical responses slamming the technology as inhumane attracted much more favour from X users. One user said the app is “objectively one of the most evil ideas imaginable,” garnering 210,000 likes. Another user similarly said: “a former Disney Channel star creating the most evil thing I’ve ever seen in my life wasn’t really what I was expecting,” gaining 139,000 likes. A user got 12,000 likes calling the app “demonic, dishonest, and dehumanizing,” stating they would never want to have an AI-generated persona on the app because “my value dies with me. I’m not a f—ing avatar.” Other users suggested the app—which is free to download but offers premium avatars and digital items for purchase—profits off of grief and could be an unhealthy way for people to deal with loss.

- Forbes - Disney Channel Star’s AI App That Creates Avatars Of Dead Relatives Sparks Backlash 

We've had the ability to create realistic, high-fidelity video and audio clones of people for a while, from companies like Synthesia in London for instance, so 2Wai is interesting mostly for its apparent willingness to venture into one of the biggest ethical minefields.

AI-powered nimbyism could grind UK planning system to a halt, experts warn

A good example of what'll be a growing trend: AI systems removing friction from a previously heavy process, and as a result enabling bigger business or societal shifts. In this case, someone has built a specialised AI system called Objector for objecting to UK planning applications. Like many such systems, there's a danger that a future iteration from OpenAI, Anthropic or Google will eat their lunch, but in the meantime they're pointing the way towards a specific intervention, a bit like the decidedly non-AI "delay repay" scheme for claiming compensation for late trains across the UK (which used to be a much higher-friction process). The objection to Objector is that it could cause the planning system to "grind to a halt", with planning officials potentially deluged with submissions. The article also mentions an AI system on the other side of the fence: Consult is designed to analyse responses to government proposals.

The arms race of using AI to manage a flood of AI-generated responses or objections has been apparent for some time in recruitment, with the rise of AI-assisted CVs and cover letters. In How AI is breaking cover letters (archive version), the Economist explains how the polish of an LLM-generated cover letter now removes what was previously a relied-upon friction and evaluation stage in the process. In Friction Was the Feature, product manager John Stone gives further examples such as product reviews, warranty claims and university admissions. I expect we'll see many more cases where AI exposes processes that relied on human effort to create friction, and which will now experience an accelerated flow.


Another step towards AI ubiquity: talking to Gemini while using Google Maps (which estimates suggest has over 2 billion active users worldwide). The example query is "Is there a budget-friendly restaurant with vegan options along my route, something within a couple miles?" That's not an easy query to satisfy today, and in the context of using voice while busy navigating, the advantages are clear. Google claim that their extensive map, Street View and location data will provide grounding that stops these models hallucinating too often.

Thanks to Iskander Smit for the link.