As a follow-up to my recent post, “Paranoid Androids,” I redesigned a practice essay exercise using Mr. Roboto. Before my mock exam class, I told students it was their responsibility to study for the exam. I explained that I would input my audio transcripts and slides into AI, ask the AI to generate a one-page cheat sheet, and then provide that cheat sheet to students for the mock practice. I hoped to teach them the importance of metacognition and self-regulated learning regardless of their use of AI. If a student doesn’t understand how to apply a concept, no amount of AI will save them from grave mistakes in legal analysis.
I want them to know how to use AI – meaning they must have knowledge of the law first so that they can catch the AI’s mistakes.
The AI cheat sheet produced errors when synthesizing a semester’s worth of course content, and I intentionally left the mistakes in. Why? Because my true test was whether students would catch them. I promised a debrief after the simulation; little did the students know that the debrief was designed to reveal the cheat sheet’s mistakes. Students who had studied the content on their own were far more likely to catch these errors (and some even brought them up during the debrief). Those who didn’t prepare accordingly and chose to rely solely on AI saw their mock exam scores suffer.
During the debrief, several students expressed their appreciation for an exercise that (1) acknowledged AI’s efficiency yet (2) showed them the pitfalls of substituting these tools for individual learning.
At the end of the lesson, I assigned them to research current trends in AI use in legal practice, and offered a bonus point on the final grade to those who crafted a lesson plan that could teach the AI research skill as an exercise in a law school classroom… stay tuned for more on that soon!
(Amy Vaughan-Thomas)
