I realise you said that this is a brief article and doesn't include everything, but I had a question about your curiosity principle.
On curiosity: I think it's interesting that you pick this out as a principle. As a former teacher, I found motivating students to be the most important part of the job (and, in my opinion, this has probably been the main barrier to previous edtech movements succeeding in supporting the hardest-to-reach students). I think curiosity is a way into motivation. Specifically, what are your thoughts on AI tutors supporting students to make connections? I have been inspired by Zurn and Bassett's work, where they argue that 'curiosity is edgework'.
Very interesting essay! The discussion of the five principles seems to suggest that the breakthrough has not yet been realized, as AI lacks empathy and emotional sensitivity and is prone to hallucinations. Early examples (such as the Alpha Schools in the U.S.) already show the limitations of AI in education.
Although the challenges already associated with AI are briefly mentioned, the essay does not address how they could be mitigated in the future.
Hello, I am a PhD student at Carnegie Mellon University working on AI for learning from a learning science perspective. Given the recent concerns about overblown expectations and potential misuse regarding the role of AI tutors, I find this article to be truly thought-provoking.
However, I approach the conclusion that "AI tutors should not approximate human tutors" from a slightly different angle. In my view, while an AI tutor cannot fully replicate the social and emotional functions of a human tutor, inheriting and refining the cognitive and evaluative functions that human tutors have historically performed remains a crucial first milestone. I see great potential for statistical evidence and precise modeling to supplement areas where human subjectivity has been a limitation, such as absolute grading and formative assessment.
This perspective aligns with the sophisticated student modeling approach developed in the Cognitive Science tradition at CMU. For example, while Cognitive Tutor is a classic ITS that predates deep learning, it demonstrated significant effectiveness in mastery learning through a rich domain model of student knowledge states. My research also leans towards designing LLM tutors not primarily as models that generate useful feedback for the student directly, but as systems that first build an explicit student model finely simulating the student's learning process, with the LLM then interacting with this model via an interface such as MCP.
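For readers less familiar with this tradition, here is a minimal sketch of Bayesian Knowledge Tracing, the classic knowledge-state model behind mastery learning in the Cognitive Tutor line of work. The parameter values below are illustrative defaults I've made up for the sketch, not fitted estimates from any real system.

```python
# A minimal Bayesian Knowledge Tracing (BKT) model for a single skill.
# Parameter values are illustrative, not fitted to data.

class BKTSkill:
    def __init__(self, p_init=0.2, p_learn=0.15, p_slip=0.1, p_guess=0.25):
        self.p_mastery = p_init   # current estimate of P(skill is mastered)
        self.p_learn = p_learn    # P(transition to mastery per practice opportunity)
        self.p_slip = p_slip      # P(wrong answer despite mastery)
        self.p_guess = p_guess    # P(correct answer without mastery)

    def p_correct(self):
        # Marginal probability that the next response is correct.
        return (self.p_mastery * (1 - self.p_slip)
                + (1 - self.p_mastery) * self.p_guess)

    def observe(self, correct):
        # Bayes update of mastery given the observed response, then
        # apply the learning transition for this practice opportunity.
        if correct:
            cond = self.p_mastery * (1 - self.p_slip) / self.p_correct()
        else:
            cond = self.p_mastery * self.p_slip / (1 - self.p_correct())
        self.p_mastery = cond + (1 - cond) * self.p_learn

skill = BKTSkill()
skill.observe(correct=True)    # evidence of mastery: the estimate rises
print(round(skill.p_mastery, 3))
```

A mastery-learning loop then simply keeps presenting practice on a skill until `p_mastery` crosses a threshold (0.95 is a commonly cited cutoff).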
Intuitively, I feel that "modeling how a student learns" is a less challenging problem than "end-to-end learning of how to teach a student," because the latter must implicitly solve the former anyway. If the cognitive characteristics from which the five principles (active learning, cognitive load management, personalization, curiosity stimulation, and metacognition) emerged are sufficiently well reflected in the student model, then a tutoring policy designed to maximize its expected gain would inherently follow those principles. When aiming for pedagogical instruction following, achieving genuine personalization, beyond mere 'style imitation', would require a massive amount of data. Wouldn't explicitly modeling the individual under instruction compensate for this limitation?
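To make the "policy maximizing expected gain over an explicit student model" intuition concrete, here is a self-contained sketch of greedy task selection over a per-skill mastery model with a BKT-style update. All skill names and parameter values are invented for illustration; this is a toy, not anyone's production tutoring policy.

```python
# Greedy task selection over an explicit student model (toy sketch).
# The student model is a per-skill mastery probability with a simple
# BKT-style update; all parameter values are illustrative.

P_LEARN, P_SLIP, P_GUESS = 0.15, 0.1, 0.25

def p_correct(p_mastery):
    # Marginal probability of a correct response given current mastery.
    return p_mastery * (1 - P_SLIP) + (1 - p_mastery) * P_GUESS

def posterior(p_mastery, correct):
    # Bayes update on the response, then the learning transition.
    if correct:
        cond = p_mastery * (1 - P_SLIP) / p_correct(p_mastery)
    else:
        cond = p_mastery * P_SLIP / (1 - p_correct(p_mastery))
    return cond + (1 - cond) * P_LEARN

def expected_gain(p_mastery):
    # Expected one-step increase in mastery from practicing this skill now.
    pc = p_correct(p_mastery)
    exp_next = (pc * posterior(p_mastery, True)
                + (1 - pc) * posterior(p_mastery, False))
    return exp_next - p_mastery

def next_skill(student):
    # Tutoring policy: pick the skill the model expects to benefit most.
    return max(student, key=lambda s: expected_gain(student[s]))

student = {"fractions": 0.3, "decimals": 0.8, "ratios": 0.5}
print(next_skill(student))   # → fractions (the least-mastered skill)
```

One caveat worth noting: because the Bayes step alone leaves expected mastery unchanged, this toy model's expected gain reduces to `P_LEARN * (1 - p_mastery)`, so the greedy policy just targets the least-mastered skill. A richer student model (forgetting, prerequisites, cognitive load) is what would make the induced policy exhibit less trivial, more principled pedagogy.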
Another point I found intriguing was your final statement that "Teaching is about how to be a human being." While I wholeheartedly agree with this, I feel there is room to differentiate between K–12 education and adult/higher education. In K–12, the role of the human teacher is clearly paramount. However, for adult learners, I anticipate that an AI tutor, acting as an interactive interface providing rapid feedback on knowledge structure failures, could accelerate their individual meaning-making process. Mentorship will remain a critical human role, but I believe the possibility exists for AI to significantly supplement, or in some cases surpass, the human role of "information mediation."
Thank you for an article that provides so much food for thought. It is personally reassuring to know that someone with such a deep learning-science perspective is working on the team building such an influential product.
This is super interesting, thanks so much.