Today we’ve got a session at ANU College of Law PEARL centre, entitled Learning/Technology in Legal Education. The session is another version of the Society of Legal Scholars session held last year in St Catherine’s College, Oxford; but with two new speakers — Kristoffer Greaves, who wrote one of the articles in the original special issue of the Law Teacher last year, called Learning/Technology (paywall), and Scott Chamberlain, whose work on simulation in PEARL fits exactly with the workshop.
First up, Craig Collins, on ‘Story interface and strategic design for new law curricula’. He adapted the slides used in Oxford to focus more on design, which was interesting to see — and those slides were spun out of his article, so it’s developing. He described the traditional teaching/learning interface and its origins in Ramism. He advocated that narrative should be at the core of what we do in our teaching and learning in the law school. Story is the gateway to the analytical, in its power to motivate learners. He took the example of a property law textbook, and contrasted it with a trailer from a film, The Secret River. Powerful moment.
Next up was Kris, on ‘Computer-aided qualitative data analysis of social media for teachers and students in legal education’. He started with a moving story of how his deafness affected his ability to engage in higher education, and how digital tools enabled him to take part in law degrees and changed the rest of his life. He took the example of his own Twitter page, and of the hashtag as a container that holds information, eg #notmydebt. He then showed how, using NVivo, he could capture others’ tweets via that hashtag container. But what about manipulating and analysing that data? Because he has pulled the hashtag data into NVivo, the nodes are intense collections of data, eg discarding retweets and focusing on the original tweets. These can be collected into buckets, which can then be searched. He mentioned that discourse analysis can be used to interpret the data. The tweets can also be located on a map (though he noted the use of VPNs to disguise geo-location). Kris noted the speed with which this can be done using the digital tools. He then showed how concepts could be mapped using the tools, how word frequency could be used, and, using a graphical spread, how the nodes or buckets related to each other.
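For readers without NVivo, the core of that workflow — discard retweets from a hashtag-collected set, then count word frequency over the originals — can be sketched in a few lines of plain Python. The tweets below are invented for illustration; NVivo itself does this through its interface, not through code.

```python
from collections import Counter
import re

# Hypothetical sample of tweets collected under a hashtag (invented data).
tweets = [
    {"text": "RT @someone: #notmydebt this is wrong", "is_retweet": True},
    {"text": "#notmydebt the debt letters keep coming", "is_retweet": False},
    {"text": "My mum got a debt letter too #notmydebt", "is_retweet": False},
]

# Discard retweets, keeping only the original tweets.
originals = [t for t in tweets if not t["is_retweet"]]

# Simple word-frequency count over the remaining text.
words = Counter()
for t in originals:
    for w in re.findall(r"[a-z']+", t["text"].lower()):
        words[w] += 1

print(words.most_common(3))
```

The same approach scales to thousands of tweets, which is the speed advantage Kris was pointing to: the filtering and counting are mechanical, leaving the researcher free for the interpretive work (discourse analysis) that the tools cannot do.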
He observed how this is affecting bibliometrics, and altmetrics. Eg measures of scholarship: engagement, visibility, influence, impact. Or how about dimensions of engagement with SoTL: information, communication, reflection and conceptualisation of teaching. And yet, there are ethical and methodological considerations: how do we use big data, and how do we square that with the ethical dimensions of research practice? He quoted de Certeau (1986) — scientificity: the ‘aura of truth, objectivity, and accuracy’. Just because it’s accessible doesn’t make it ethical… Fascinating paper, and the altmetrics on the Law Teacher page show how important his application of data tools to legal education will be to the future of what we do as teachers (and researchers), and what our students will do, too.
Next, Scott Chamberlain, describing his work on e-simulations, based on earlier work called Machiavelli’s Workshop, done well over a decade ago. He gave us an overview of the software — the scenario centre, the role play centre, the student centre, the teacher centre, etc. He showed wireframe mockups of process. No AI. It’s text-based (query — could we use Kris’s approach to digital tools to enable student learning in this context?). It allows for objective scores, no scores, or scores determined by polling players and non-players. The aim is for lecturers to design sims without requiring IT/coding skills. That was our aim in SIMPLE, too, but in my experience it’s a real struggle to achieve. But Scott’s approach is remarkably based upon legal reasoning, rather than generic sim and coding approaches, so he may well be more successful in this. The demonstration of the wireframe was fascinating, with so many different aspects (eg realistic automation and the use of bots in bot-driven role plays) — not just of simulation engine building, but also, in research terms, of the data that’s available for analysis. When? Beta in development, late 2017, publicly available in mid-late 2018.
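The three scoring modes Scott described — an objective score, no score, or a score derived by polling players and non-players — are worth pausing on, because they map onto quite different assessment philosophies. A minimal sketch of how an engine might dispatch between them (all names here are hypothetical; Scott’s actual implementation was not shown):

```python
from statistics import mean

def score_performance(mode, objective_score=None, poll_votes=None):
    """Return a score for a role-play performance under one of three modes.

    Hypothetical sketch of the modes described in the talk: a fixed
    objective score, no score at all, or the average of poll votes
    cast by players and non-players.
    """
    if mode == "objective":
        return objective_score
    if mode == "none":
        return None
    if mode == "poll":
        return mean(poll_votes)
    raise ValueError(f"unknown scoring mode: {mode}")

# Example: players and observers each vote 1-10 on a negotiation.
print(score_performance("poll", poll_votes=[7, 8, 6, 9]))  # 7.5
```

The polling mode is the interesting one pedagogically: it turns assessment into part of the simulation itself, since non-players become engaged observers with a stake in the outcome.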
My presentation was the last. Slides up in the usual place, under the Slides tab above.
Final session was the Panel session. On a question about how to use the tools in legal education, Kris compared the explicit and emergent approaches to data, showing how the emergent approach was a much more flexible and powerful way of analysing data. Other fascinating comments and questions came from Anneka Ferguson, Jonathan Powle and others.