Really helpful overview - thank you. However, I do question what we are collectively doing here. Why are tech companies so desperate to get into the classroom? Why is education being treated as an engineering “problem to solve”? Too much to say here, but I have expanded on this today on my Substack if anyone else out there is interested. As an actual teacher with actual teenagers, I’m kind of hitting the PAUSE button at the moment with AI. #WePreferUs
Excellent writeup - probably the one place that summarizes the LearnLM paper's findings!
I am sure this paper will serve as a source of truth for several of its peers and for GenAI EdTech applications.
Going through the linked PDF, though, one thing confounded me (or maybe I missed it): it references seven pedagogical benchmarks, in both the introduction and the conclusion. They seem like fundamental filters for evaluating any pedagogical intervention, but the paper never outlines them definitively anywhere.
Google's and Microsoft's AI strategies in the education sector demonstrate innovation and diversity, while OpenAI's ChatGPT Edu further advances AI applications in higher education.
Love this analysis! Makes you appreciate the complexity of pedagogy. We assume learning to be the automatic result of good instruction, but it is so much larger. AI’s challenge of disembodied ‘knowing’ vs deep understanding exemplifies our narrow model of instruction and education. This research may be more effective and efficient if it acknowledges these intrinsic limitations of AI and this model, and doesn’t attempt to be “transformative”. I have found NotebookLM to be helpful, but far from true understanding, learning and teaching.
This is actually great.
Very good piece Claire!
Really helpful - curious how you would define 'good teaching' and, by the same token, 'good learning'?
That's a stellar overview of the LearnLM model (frankly, better than my first reaction, captured at https://davidharper.substack.com/p/why-llm-tutors-will-get-better). It prompts a few thoughts for me:
* Glad you mentioned safety: I'm just so struck by the idea of harmful anthropomorphisms. Even before the AI influencers' tendency to encourage everybody to "treat AI like a person" (Ethan Mollick's Principle #3), this dynamic seemed *inevitable* and yet, in so many ways, harmful.
* The single best idea I took away from the paper is that "helpfulness may often be at odds with pedagogy and learning". When referring to the paper, I find myself sometimes saying "an LLM tutor wants to be LESS HELPFUL than an optimized LLM that wants to give you the best answer."
* As I wrote in my post, I think the key achievement of LearnLM is its (so far successful) automatic evaluations because, to me, that implies the team has designed a viable optimization method. I happen to believe the proposition that LLM-critics can help improve LLM-tutors, so I think they are on the right path.