This article REALLY should not be overlooked:
J. Findley, et al., JD-Next: A Valid and Reliable Tool to Predict Diverse Students’ Success in Law School, 20 J. Empirical Legal Studies 134 (2023).
From the abstract:
Admissions tests have increasingly come under attack by those seeking to broaden access and reduce disparities in higher education. Meanwhile, in other sectors there is a movement towards “work-sample” or “proximal” testing. Especially for underrepresented students, the goal is to measure not just the accumulated knowledge and skills that they would bring to a new academic program, but also their ability to grow and learn through the program.
The JD-Next is a fully online, non-credit, 7- to 10-week course that trains potential JD students in case reading and analysis skills prior to their first year of law school. This study tests the validity and reliability of the JD-Next exam as a potential admissions tool for juris doctor programs. (In a companion article, we report on the efficacy of the course for preparing students for law school.)
In 2019, we recruited a national sample of potential JD students, enriched for racial/ethnic diversity, along with a sample of volunteers at one university (N=62). In 2020, we partnered with 17 law schools around the country to recruit a cohort of their incoming law students (N=238). At the end of the course, students were incentivized to take and perform well on an exam that we graded with a standardized methodology. We collected first-semester grades as an outcome variable, and compared JD-Next exam properties to legacy exams now used by law schools (the LSAT, including converted GRE scores).
We found that the JD-Next exam was a valid and reliable predictor of law school performance, comparable to legacy exams. For schools ranked outside the top 50, we found that the legacy exams lacked significant incremental validity in our sample, but the JD-Next exam provided a significant advantage. We also replicated known, substantial racial and ethnic disparities on the legacy exam scores, but estimate smaller, non-significant score disparities on the JD-Next exam. Together, this research suggests that, as an admissions tool, the JD-Next exam may reduce the risk that capable students will be excluded from legal education and the legal profession.
The companion paper, testing the efficacy of the JD-Next program for improving law school grades, is available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3845577.
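For readers curious about the "incremental validity" language in the abstract: the question is whether a new score improves prediction of an outcome (here, first-semester grades) over and above the predictors already in use. The sketch below is purely illustrative and is not taken from the paper; it uses synthetic data and hypothetical variable names (legacy, jdnext, grades) to show the standard nested-regression check, comparing R-squared with and without the added predictor and testing the gain with a partial F-test.

# Illustrative only: how incremental validity is typically assessed.
# Synthetic data; variable names are hypothetical, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
legacy = rng.normal(size=n)                         # hypothetical standardized legacy score (e.g., LSAT)
jdnext = 0.5 * legacy + rng.normal(size=n)          # hypothetical new score, correlated with legacy
grades = 0.3 * legacy + 0.4 * jdnext + rng.normal(size=n)  # hypothetical first-semester grades

# Baseline model: legacy score only
base = sm.OLS(grades, sm.add_constant(legacy)).fit()

# Full model: legacy score plus the new predictor
full = sm.OLS(grades, sm.add_constant(np.column_stack([legacy, jdnext]))).fit()

# Incremental validity = gain in explained variance (delta R^2),
# with a partial F-test for whether the gain is statistically significant.
print(f"R^2, legacy only:      {base.rsquared:.3f}")
print(f"R^2, legacy + new score: {full.rsquared:.3f}")
print(f"Delta R^2:             {full.rsquared - base.rsquared:.3f}")
f_stat, p_value, _ = full.compare_f_test(base)
print(f"Partial F-test for the added predictor: F={f_stat:.2f}, p={p_value:.4g}")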
This next piece is sponsored by the good folks at the Skynet Law Offices:
J.H. Choi and D.B. Schwarcz (Minnesota), AI Assistance in Legal Analysis: An Empirical Study (August 2023).
Can artificial intelligence (AI) augment human legal reasoning? To find out, we designed a novel experiment administering law school exams to students with and without access to GPT-4, the best-performing AI model currently available. We found that assistance from GPT-4 significantly enhanced performance on simple multiple-choice questions but not on complex essay questions. We also found that GPT-4’s impact depended heavily on the student’s starting skill level; students at the bottom of the class saw huge performance gains with AI assistance, while students at the top of the class saw performance declines. This suggests that AI may have an equalizing effect on the legal profession, mitigating inequalities between elite and nonelite lawyers.
In addition, we graded exams written by GPT-4 alone to compare it with humans alone and AI-assisted humans. We found that GPT-4’s performance varied substantially depending on prompting methodology. With basic prompts, GPT-4 was a mediocre student, but with optimal prompting it outperformed both the average student and the average student with access to AI. This finding has important implications for the future of work, hinting that it may become advantageous to entirely remove humans from the loop for certain tasks.
[Posted by Louis Schulze, FIU Law]