The Algorithmic Turn: The Emerging Evidence On AI Tutoring That's Hard to Ignore
Carl Hendrick is a professor in Amsterdam and an expert in how we learn and teach. This is a well-balanced article from someone with a long history in the field. He looks at the current contradictory situation: a GPT-4 tutor has been shown to outperform in-class learning delivered by highly rated instructors in a rigorous but small-scale study (there are many caveats, so do read the article), yet we know that AI systems are trained to solve our problems and answer our questions, which is very different from how a good teacher behaves. His thesis is that we could see a significant improvement in student learning: we have a hundred years of learning-science research showing the way, and AI systems are infinitely patient and, more importantly, will improve exponentially in a way that can be replicated globally and at speed. I found this an insightful piece.
What has become clear is that LLMs designed for education must work against their default training. They must be deliberately constrained to not answer questions they could easily answer, to not solve problems they could readily solve. They must detect when providing information would short-circuit learning and withhold it, even when that makes the interaction less smooth, less satisfying for the user. This runs counter to everything these models are optimised for. It requires, in effect, training the AI to be strategically unhelpful in service of a higher goal the model cannot directly perceive: the user’s long-term learning.
The implications are sobering. If many current uses of AI in education are harmful, and if designing systems that enhance learning requires sophisticated understanding of both pedagogy and AI behaviour, then the default trajectory is not towards better learning outcomes but worse ones. Students already have unrestricted access to tools that will complete their assignments, write their essays, solve their problem sets. They are using these tools now, at scale, and in most cases their teachers lack both the knowledge to distinguish harmful from helpful uses and the practical means to prevent the former. The question is not whether AI will transform education. (It clearly already is.) The question is whether that transformation will make us smarter or render us dependent on machines to do our thinking for us.
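To make that idea of "strategic unhelpfulness" concrete, here's a minimal sketch of my own (not Hendrick's, and not from the article): a gate that withholds a ready answer when the request looks like outsourced work. The keyword heuristic is purely illustrative; a real tutoring system would use a trained classifier, or the model itself, to judge intent.

```python
import re

# Illustrative only: real systems would classify intent with a model,
# not keyword patterns like these.
SOLVE_PATTERNS = [
    r"\bsolve (this|it|my)\b",
    r"\bwrite (my|the) essay\b",
    r"\bgive me the answer\b",
    r"\bdo my (homework|assignment)\b",
]

def wants_finished_answer(message: str) -> bool:
    """Guess whether the student is asking for completed work."""
    text = message.lower()
    return any(re.search(p, text) for p in SOLVE_PATTERNS)

def tutor_reply(message: str, ready_answer: str) -> str:
    """Withhold a ready answer when handing it over would short-circuit learning."""
    if wants_finished_answer(message):
        # Strategic unhelpfulness: redirect to the student's own reasoning.
        return "What have you tried so far? Talk me through your first step."
    return ready_answer

print(tutor_reply("Solve this for me: 3x + 5 = 20", "x = 5"))  # redirects
print(tutor_reply("Is my answer of x = 5 correct?", "x = 5"))  # answers
```

The hard part is exactly what the excerpt describes: reliably detecting when an answer would short-circuit learning, which is far beyond any keyword list.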
And from his concluding section:
Perhaps the answer is that teaching and learning are not the same thing, and we’ve spent too long pretending they are. Learning, the actual cognitive processes by which understanding is built, may indeed follow lawful patterns that can be modelled, optimised, and delivered algorithmically. The science of learning suggests this is largely true: spacing effects, retrieval practice, cognitive load principles, worked examples; these are mechanisms, and mechanisms can be mechanised. But teaching, in its fullest sense, is about more than optimising cognitive mechanisms. It is about what we value, who we hope our students become, what kind of intellectual culture we create.
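"Mechanisms can be mechanised" is easy to illustrate: spaced repetition, for instance, reduces to a small scheduling algorithm. Here's a minimal Leitner-box sketch (my illustration, not from the article; the intervals are arbitrary):

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative Leitner-box intervals, in days; real systems tune these.
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

@dataclass
class Card:
    prompt: str
    answer: str
    box: int = 1  # higher box = longer gap before the next review
    due: date = field(default_factory=date.today)

def review(card: Card, correct: bool, today: date) -> None:
    """Promote or demote the card, then reschedule it (the spacing effect)."""
    card.box = min(card.box + 1, 5) if correct else 1  # failure restarts spacing
    card.due = today + timedelta(days=INTERVALS[card.box])

def due_cards(deck: list[Card], today: date) -> list[Card]:
    """Retrieval practice: only surface cards whose interval has elapsed."""
    return [c for c in deck if c.due <= today]
```

What the sketch leaves out is Hendrick's point: deciding what to schedule, for whom, and to what end is teaching, and it doesn't reduce to a loop.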
What if the loved ones we've lost could be part of our future?
2Wai founder and Canadian actor Calum Worthy posted this video a few days ago, causing quite a stir. He's pitching the company's AI avatar creation app as a way to preserve a memory and representation of a loved one after they've died. Like others, you'll likely be reminded of the Black Mirror episode Be Right Back (2013, series 2 episode 1 - watch the trailer). But of course the idea of talking to people who've died didn't start with Charlie Brooker: similar themes run all the way from Odysseus consulting the spirits of the dead in Homer's Odyssey to the digitally recorded minds of William Gibson's 1984 novel Neuromancer.
Worthy’s post containing the ad garnered just 6,000 likes, but plenty of critical responses slamming the technology as inhumane attracted much more favour from X users. One user said the app is “objectively one of the most evil ideas imaginable,” garnering 210,000 likes. Another user similarly said: “a former Disney Channel star creating the most evil thing I’ve ever seen in my life wasn’t really what I was expecting,” gaining 139,000 likes. A user got 12,000 likes calling the app “demonic, dishonest, and dehumanizing,” stating they would never want to have an AI-generated persona on the app because “my value dies with me. I’m not a f—ing avatar.” Other users suggested the app—which is free to download but offers premium avatars and digital items for purchase—profits off of grief and could be an unhealthy way for people to deal with loss.
- Forbes - Disney Channel Star’s AI App That Creates Avatars Of Dead Relatives Sparks Backlash
We've been able to create realistic, high-fidelity video and audio clones of people for a while now, from companies like London-based Synthesia for instance, so 2Wai is interesting mostly for its apparent willingness to venture into one of the biggest ethical minefields.
AI-powered nimbyism could grind UK planning system to a halt, experts warn