I attended ‘Testing for Law School Admission & Licensing’ with Michael Kane and Peter Pashley. The session concerned how measurement affects the ‘pipeline’ into the profession. Peter introduced the Law School Admission Council, where he is principal research scientist. LSAC started in 1947, and the first real LSAT was administered in 1948 (though, as I point out in Transforming Legal Education, it’s arguably the case that Thorndike’s innovations at Columbia Law School in the 1920s were the start of the LSAT initiative). Membership now includes 199 ABA-approved law schools, 1 Canadian law school and 1 Australian (Melbourne?). Volumes are well down, 25% from two years ago, and there was a similar downturn in 1996-2000. Whether this downturn is a similar blip remains to be seen, though Peter was fairly upbeat about it.
Pashley makes the point that the LSAT, as a standardized test, gives information that acts against elitist schools and backgrounds. It was created to assess skills critical for success in law, has promoted access for students from varied socioeconomic backgrounds, and should be used alongside other measures. An applicant’s file usually includes the school application, LSAT score(s), a writing sample, undergraduate grade point average, a personal statement and letters of recommendation.
What does standardization consist of? There are parallel test forms, matched on content and statistical properties. Irregularities are investigated; the exam is timed in 35-minute sections; it is pretested under operational conditions; and scores are ‘equated’. Critical reasoning tests are written largely by philosophers. The standardising process takes around 40 months: an item is written, goes through soundness and sensitivity reviews, is assembled into a pretest section, administered, assembled into a pre-operational section, administered again, tested, and signed off. Surprisingly, it is not a computerized test: paper & pencil.
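Out of interest, here’s a minimal sketch of what ‘equating’ can involve. This is my illustration, not LSAC’s actual procedure (operational equating is more sophisticated, typically IRT-based): a simple mean-sigma linear equating that maps raw scores on a new form onto the scale of a reference form, so that a given score means roughly the same thing across administrations. All numbers are invented.

```python
import statistics

def linear_equate(new_scores, ref_scores):
    """Mean-sigma linear equating: map a raw score on the new form
    onto the reference form's scale by matching the two score
    distributions' means and standard deviations."""
    mu_new, sd_new = statistics.mean(new_scores), statistics.stdev(new_scores)
    mu_ref, sd_ref = statistics.mean(ref_scores), statistics.stdev(ref_scores)
    slope = sd_ref / sd_new
    intercept = mu_ref - slope * mu_new
    return lambda raw: slope * raw + intercept

# Invented raw-score samples from two 'parallel' forms.
reference_form = [52, 61, 58, 70, 66, 49, 63, 57]
new_form = [48, 57, 54, 66, 62, 45, 59, 53]  # the new form ran slightly harder

equate = linear_equate(new_form, reference_form)
print(round(equate(55), 1))  # a raw 55 on the new form maps to 59.0 on the reference scale
```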
Reliability estimates range from 0 to 1; the closer to 1 the better, and the LSAT’s is around 0.93. Security is well-organized (one reason not to go into the digital domain, he argued). Do the skills tested relate to law school performance? The report LSAC Skills Analysis Law School Task Survey analyzed this; in his opinion the LSAT did pretty well. I disagree.
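For readers who don’t live in psychometrics: a coefficient like that 0.93 is usually an internal-consistency estimate. Here’s a rough sketch of one common such statistic, Cronbach’s alpha, computed from an examinee-by-item score matrix. The data are invented, and I don’t know that LSAC uses alpha specifically; the point is only to show what a figure like 0.93 summarizes.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented 0/1 (wrong/right) responses: 6 examinees x 5 items.
responses = [
    [1, 1, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1],
    [1, 0, 1, 0, 0],
]
print(round(cronbach_alpha(responses), 2))  # roughly 0.63 for this toy data
```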
What of the LSAT’s median predictive validity results? He contrasted the results of the LSAT with FYA (first-year average) and UGPA, and correlated the results with law school performance. There’s undoubtedly correlation, and while Pashley had to proceed quickly, it’s clear there are a lot of issues here requiring unpacking (and the literature bears this out).
In its work LSAC tries to ensure fairness & equity. I wasn’t convinced, nor did he address the issues raised by the research literature. I’d have liked less on the ‘who we are’ and more on these critical issues.
Michael Kane was up next, on Bar Exams. He described the differences between Bar Exams and the LSAT. Bar exams are pass/fail; the LSAT puts applicants on an aptitude scale. Schools use the LSAT to identify the ‘best’ candidates; the profession uses the Bar Exam as a filter. There’s a basic interpretation of licensure exam scores, based on knowledge, skills and judgment. He observed that exam tasks should be relevant to practice, but they don’t have to mimic practice, because actual practice can be complicated and ambiguous. He contrasted this with a competence model of licensure. I didn’t really understand his points here.
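The pass/fail versus aptitude-scale contrast is worth making concrete. A toy sketch follows: the 120-180 LSAT scale is real, and 266 is a real Uniform Bar Exam cut score (New York’s), but the functions themselves are just my illustration of the two ways a score gets read.

```python
def admissions_reading(lsat_scaled):
    """LSAT-style use: report a position on the 120-180 scale,
    which schools use to rank-order applicants."""
    assert 120 <= lsat_scaled <= 180
    return f"scaled score {lsat_scaled}: relative standing among applicants"

def licensure_reading(bar_scaled, cut_score=266):
    """Bar-style use: collapse the scale to a single pass/fail decision
    against a jurisdiction's cut score (266 is New York's UBE figure;
    cut scores vary by jurisdiction)."""
    return "pass" if bar_scaled >= cut_score else "fail"

print(admissions_reading(165))  # schools see a rankable scale position
print(licensure_reading(270))   # the profession sees only 'pass'
print(licensure_reading(260))   # ... or 'fail'
```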
He showed data from a study that contrasted U-GPA, LSAT, 4-point L-GPA, index-based L-GPA, and total NY Bar score, looking for correlations among these measures. The correlations between Bar Exam scores and law school GPA seemed to indicate a link between law school performance and Bar Exam performance. Further analysis seemed to say that if you do well in college you’re likely to do well in law school, and then to do well in the Bar Exam.
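To make the shape of that analysis concrete, here’s a sketch of the sort of correlation table such a study produces. The column names mirror the study’s measures, but the scores below are entirely invented; the point is only the computation.

```python
import numpy as np

# Entirely invented scores for 8 students; columns mirror the study's measures:
# UGPA, LSAT, law-school GPA (4-point), total NY Bar score.
data = np.array([
    [3.2, 158, 3.1, 270],
    [3.8, 167, 3.6, 301],
    [2.9, 151, 2.8, 255],
    [3.5, 162, 3.4, 288],
    [3.0, 149, 2.7, 248],
    [3.9, 171, 3.8, 310],
    [3.3, 156, 3.0, 265],
    [3.6, 164, 3.5, 295],
])
labels = ["UGPA", "LSAT", "L-GPA", "Bar"]

# Pearson correlation matrix across the four measures.
r = np.corrcoef(data, rowvar=False)
for i, row_label in enumerate(labels):
    print(row_label, [round(x, 2) for x in r[i]])
```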
He did make the point, and I agree with him, that it’s difficult to validate the predictive interpretation of licensure exams: developing a good criterion of success for professional practice is hard; failing candidates don’t get to practice; and ‘success’ in practice depends on many variables in addition to professional skills. So, he argued, strong predictive rhetoric should be avoided. I disagree. I think that there are other ways of going about this; the research of Papadakis & colleagues in medical education is an instance of that.
Had the odd feeling that I was in the wrong room. Was this the session I signed up to?