Discussion about this post

croxis:
I ran into that paper recently! There are definitely A LOT of things to consider regarding LLM AI in education. I'm just n=1, but here are my experiences.

AI models continue to improve, so my experiences from last year may no longer hold.

I appreciated that the linked study identified the different degrees to which AI was used in the programmers' workflows. But *how* an LLM is used matters. In programming it can be used to write code, to reference documentation, to organize the programmer's thinking, etc. As far as I could tell, the study reported the degree of AI use, but I didn't catch any *how*.

I also used an LLM to help me write physics problems for a buoyancy unit. I noticed that the story problems lacked the scaffolding finesse that helps guide students toward a deeper understanding of the underlying phenomenon. This is the "do it for me" type of task.

On the other hand, it did feel useful to have a conversation with it while developing my unit. I just recently picked up physics, I am the only physics teacher in the school, and university was 20 years ago. I call this a "talk with an expert" task mixed with a "finding a solution" task. I didn't blindly follow the results, and I was also referencing OpenStax textbooks.

Some tools, like Diffit, are fantastic for very specific tasks, such as converting a text into multiple reading levels. The prompts it generates are sometimes useful too, but I try to be consistent with my student response prompts and graphic organizers.

My metacognition is probably fooling me about how well it helped me write my unit. Yet I'm fairly firm in my conviction that a tool with a very specific function, like Diffit, speeds up my ability to adapt curriculum.

Mark Goodrich:

Thanks for drawing attention to this study. Too many studies on AI use rely on self-assessment of whether it helped.

3 more comments...