Panel session: The Role of AI and Legal Education, Westminster U Law School

A couple of weeks ago I spoke on a panel session at a one-day conference organised by Westminster U Law School – The Role of AI and Legal Education: Preparing the Next Generation of Lawyers.  I was on annual leave at the time, in Florence, so attended only the panel rather than the whole conference, but many of the papers seemed fascinating from what I read of them in the abstracts.  I hope to see them published in some form or other.  With me on the panel were:

  • Samuel Dahan, Queen’s University, Canada
  • Alex Nicholson, University of Leeds, UK
  • James Faulconbridge, Lancaster University, UK
  • Joan Loughrey, Queen’s University Belfast, UK
  • Luke Mason, Westminster University, UK (Chair)

We were set a predictable question by our chair from the outset – what’s the biggest challenge we face in this area?  There were useful replies from the various perspectives of the panellists.  I said that identifying the biggest was problematic, but I did mention the following as causing concern, among much else:

  1. I can’t remember any other technology that’s been so criticised or vilified.  And with good reason.  Kate Crawford sets out the broad case against the social and ecological impacts of GenAI; in her excellent Substack Helen Beetham sums up so many aspects of real educational concern; Philippa Hardman is excellent on both the positives and negatives of human / machine learning.  And the more I read about what’s (mis)called hallucination, the more I’m worried.  As Iain Thomson points out in this revelatory article, ‘The fundamental problem is that AI models are trained to reward guesswork, rather than correct answers. Guessing might produce a superficially suitable answer. Telling users your AI can’t find an answer is less satisfying.’  This is a purely commercial approach to truth-telling.  It produces bullshit.  It’s the opposite of professional and academic care for truth and ethics in communication (eg health care, safety tech industries, legal advice).
  2. AI literacy is an illiterate concept. It’s like saying everyone should be literate.  Really? In which literacies, to which standards?  Literacy is always linked to social backgrounds, textual contents & contexts, and communities of practice/readers.  And therefore to wider social issues of access, exclusion, cost, availability, power, censorship.
  3. On thinking… AI doesn’t introduce a new kind of thinking. Its very presence reveals what actually requires thinking.  In doing that, it follows much else in the digital revolution. But with AI, learning as an act can itself be massively augmented and improved.  It’s in our hands to create the assets, the engines, the resources to enable that to happen.
  4. AI reinforces what we knew about student engagement. In a report from the Online Learning Consortium participants turned to AI when ‘they believed assignments were repetitive, redundant or just busy work’.  On engagement, ‘AI could deepen emotional engagement such as interest, meaningfulness and belonging.’ Or it could do the opposite.  Up to us, as always, to create the socialising context.
  5. How we research… We need to build research organisation tools to investigate the results of our research.  Other disciplines are already doing this, eg medical education – research summaries, metareviews, policy reviews.  Eg AMEE’s BEME guide published in Medical Teacher.  Or see the incredibly useful Stanford U Lane Medical Library’s guide to ‘AI in medical education (general topics [only general!])’, where each article was annotated by gpt-4o using the following prompt:

    “Provide a 75-word summary of this uploaded article by highlighting the key findings that medical educators, MD school directors, faculty, and instructors can use to re-design medical school curricula, modify or add new courses, and define AI competency for MD students.” 

  6. …Which informs what we need to do for our future.
    1. Create sources of informed trust. And maintain them.
    2. Constantly examine the values that we hold and share that are always at play in our thinking and discourse.
    3. Collaborate – legal education centres should collaborate. Book series, journals, etc.  Use organisations such as BILETA, our scholarly bodies, to come together and to develop a research platform and resources.
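The annotation workflow described in point 5 is simple enough to sketch. Below is a minimal illustration, assuming the common chat-message format that most LLM APIs accept: each article’s full text is paired with the Lane Library’s fixed prompt, so every annotation is produced under identical instructions and the results are comparable across the collection. The function name and the elided client call are my assumptions for illustration, not the library’s actual pipeline.

```python
# The fixed instructional prompt quoted above, applied identically to every article.
ANNOTATION_PROMPT = (
    "Provide a 75-word summary of this uploaded article by highlighting "
    "the key findings that medical educators, MD school directors, faculty, "
    "and instructors can use to re-design medical school curricula, modify "
    "or add new courses, and define AI competency for MD students."
)

def build_messages(article_text: str) -> list[dict]:
    """Pair the fixed annotation prompt with one article's full text,
    in the chat-message format most LLM APIs (including gpt-4o's) accept."""
    return [
        {"role": "system", "content": ANNOTATION_PROMPT},
        {"role": "user", "content": article_text},
    ]

# One call per article; the actual request to gpt-4o (via whichever client
# the library used) is elided here, as any chat-completion API could serve.
messages = build_messages("Full text of an article on AI in medical education...")
```

The point of the fixed system prompt is consistency: because every article is summarised under the same instructions, the resulting annotations form a uniform, scannable research resource rather than a heap of ad hoc summaries.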

In free discussion I found my mind drifting to a comparison of GenAI with earlier tech shifts.  A few posts back on this blog I was talking about my earliest digital experiences in 1989, which were so formative for me.  But being in Florence, an extraordinary medieval & Renaissance city whose fifteenth-century scribal cultures (ie prior to the moveable-type revolution) supported an entire street full of booksellers serving not just ecclesiastical or noble clients but European universities too, as well as those who could afford to buy texts for learning, the historical tech comparisons were irresistible. I return again to the key rhetorical, epistemological and jurisprudential issue: that the forms of the tech tools created in the eleventh & subsequent centuries to understand & analyse Justinianic texts profoundly influenced not just the dissemination and reception of Roman law, but the very analysis of it.  And this point about material practices applies again and again, at every tech shift.  But because we are historically amnesiac about the influence of material practices, we are blind to the nature of such processes.

A good example is that of Langdell’s case method in the later nineteenth century.  His method was predicated on a number of innovations – the conventional lecture theatre turned 90 degrees on itself, the mode of lecture discourse: these appear to be the big-ticket items in the Langdellian revolution.  But crucial, and less visible, were the casebooks.  Langdell’s original casebooks were simply collections of reported and reprinted cases.  Only later did they contain secondary resources, commentary, etc. (See C. Woodard, “The Limits of Legal Realism: An Historical Perspective” (1968) 54 Virginia Law Review, 722, for a critique of the approach and its consequences.)  But Langdell’s publishing innovation could not have come about without the disintermediating revolutions of mid-nineteenth century printing and publishing, in Europe and the USA.  These included the invention of cylinder presses to replace Gutenberg’s flatbed press, of rotary presses that printed both sides of a page in one operation, the use of pulped wood in place of pulped rag, and the invention of folding and stitching machines.  All these increased exponentially the volume and standardisation (though not necessarily the quality or longevity) of printed productions, and all took place before 1870.  Later inventions such as linotype typesetting hugely increased the speed of the production of text. (See A. Weedon, Victorian Publishing: The Economics of Book Production for a Mass Market, 1836-1916 (Aldershot, Ashgate Publishing, 2003).)

The same is true of GenAI.  Not only does it force us to rethink our educational practices – it compels us to imagine new ways of understanding, using, and communicating about law in education, rather as glossed literatures did in the early centuries of the last millennium.  And its effects on the current material cultures of legal education are as yet unknown; but they will become clear to us in time.  Let’s only hope they are clear to us before predatory corporate publishers take advantage of them, and of us.  Or has that ship already sailed?

My grateful thanks to Dr Marloes Spreeuw for the invitation (and for her wonderful organisation of the whole event).  We need many more such social events before we can begin to comprehend the enormity of the impact that AI will have on our legal educations.

