Second keynote, this time from Ludwig Bull, a student from Cambridge (when did you last attend a legal ed conference, or indeed any conference, where a student presents…?). Avid readers of my blog will remember that I’ve already posted on his achievements. He started his keynote with a 3-D model of citations of Donoghue v Stevenson, a good example both of technology applied to core legal literature and of Ludwig’s central argument, stated briefly in the title.
L. directs two companies, LawBot and DenninX. LawBot is a chat application, in effect the automation of legal advice-giving, preparing a person to take a claim further. DenninX is a search engine for English case law, where the data is automatically produced.
Decoding the law is L.’s focus: the creation of tools for lawyers to do their job better. He described the process of encoding and decoding legal data, and how users of law, including lawyers, could use tools to understand legal information better. He drew a distinction between internal and external change, and argued that lawyers simply do not develop the toolset, beyond tasks on the order of document review, to map data analysis onto a legal problem or to solve larger problems in the social understanding of law and legal process.
So what practical steps can be taken? L. argued that a knowledge of data tools and processes is essential to help us understand law better. He compared data analysis education with reading, and the comparison is an intriguing one. At questions, I raised three related issues:
1/ Apomediation
I’ve written about apomediation here, and in last year’s Law Teacher special edition on legal education / technology:
Disintermediation occurs everywhere there is digital presence, and this applies as much to patient and client-based services as to industrial and retail processes. Eysenbach for instance described disintermediation as a process where the advice of expert health professionals was being supplemented by consumers and patients who were gaining access to unfiltered information. He therefore suggested a role for apomediaries, as he termed them, online guides to enable patients to interpret the vast amount of health information and data online, and to assist them to make decisions on the basis of that information. They helped users to navigate problems such as informational overload, and used collaboration to enable users to scale, filter, recommend and bookmark information and virtual communities.[1]
It seems to me that LawBot is a good example of apomediation in law, though of course this by no means exhausts its potential use.
2/ Textual literacy / data literacy analogy
L. was rather diffident about his literacy analogy but I think it works well. Look at it in this context. Before the 1789 revolution in Paris, there were around 60 newspapers throughout France. As Simon Schama points out in his history of the French Revolution, there followed an explosion of communication genres, both in type and quantity, following the overthrow of censorship.[2] By the middle of 1792, for instance, there were around 500 newspapers in Paris alone. Many of them were short-lived, with tiny circulations. But what is remarkable is the explosion of communication channels as well as the sheer increase in volume – newspapers and gazettes with a huge range of formats and tone; subscription journals; and illustrated literature such as almanacs, copies of speeches, prints, engravings and the like. The sales figures also point to a remarkable literacy among the general population. As Schama remarks, ‘literacy rates in late eighteenth-century France were much higher than in the late twentieth-century United States’, and it was this literacy that, through the media of posters, brochures, reviews, journals, almanacs, fantasy novels, pornography and non-fiction of many kinds, fed the appetite of the people, in Paris and beyond, for information about the political and cultural events of the revolution.
Fast forward to another example, the 2014 independence referendum in Scotland, which bears no comparison with the French Revolution except in one respect: the extent to which people rapidly became engaged in a political process, and needed to think, read, argue and communicate about that process, all within a short timescale. See this post particularly. The context to political debate was essential to stimulating the debate. What will stimulate the creation and deployment of data education in law schools? Will it be genuinely interested debate, or the pressure of legal regulators, or the power of the marketplace?
3/ Law and data – what sort of educational relation?
The Strathclyde joint law and computing degree started, I believe, in the early 1990s. It was innovative in its day, possibly the first such joint degree in the UK, probably influenced by the earlier legal education examples at Chicago-Kent. It was a degree that gave students coding and computing knowledge and skills. What L. is arguing for, though, is the development not of coding skills and knowledge but of data analysis, and an understanding of how such analytical tools can be used by the wide variety of people involved in the administration of justiciable problems: policymakers, regulators, lawyers, law students, and the general public. As I understand it, he’s advocating levels of such understanding, which is as it should be.
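To make that kind of data literacy concrete, here is a toy sketch of the sort of analysis a citation model like Ludwig’s rests on: counting how often cases are cited within a small, hand-made citation graph. The cases are real landmarks of the law of negligence, but the edges here are illustrative only, not drawn from DenninX’s actual data:

```python
from collections import Counter

# Illustrative, hand-made citation data: each case maps to the
# cases it cites. The edges are a toy example, not real data.
citations = {
    "Donoghue v Stevenson": [],
    "Hedley Byrne v Heller": ["Donoghue v Stevenson"],
    "Anns v Merton": ["Donoghue v Stevenson", "Hedley Byrne v Heller"],
    "Caparo v Dickman": ["Donoghue v Stevenson", "Anns v Merton",
                         "Hedley Byrne v Heller"],
}

# Count how often each case is cited: its in-degree in the citation graph.
cited_counts = Counter(
    cited for cited_list in citations.values() for cited in cited_list
)

for case, n in cited_counts.most_common():
    print(f"{case}: cited {n} times")
```

Even this trivial exercise shows the shape of the skill being advocated: structuring legal material as data, then asking questions of it, rather than writing software for its own sake.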
I wanted to hear more from Mr Bull on a whole range of issues; the presentation was perhaps rather too introductory. Nevertheless, there were fascinating discussions and issues raised on the subject — which of course is the subject of the legal hack that Nigel Hudson and I are down to facilitate tomorrow…
- [1] G. Eysenbach, “From Intermediation to Disintermediation and Apomediation: New Models for Consumers to Access and Assess the Credibility of Health Information in the Age of Web2.0” (2007) 129 Studies in Health Technology and Informatics 162
- [2] S. Schama, Citizens: A Chronicle of the French Revolution (London: Penguin Books, 1989), 180