BILETA22: Day 2: Parallel session: Future Technologies

Post-lunch, and still motoring here, though battery levels dropping.  Final parallel paper session of the conference and I’m in on the Future Tech stream, hanging on to the coat-tails of presenters’ expertise across a dizzying array of topics and technologies.  Sitting in on sessions like these when it’s not your expert area sure soaks up brain power…

But first up is Catherine Easton, now Head of Lancaster University Law School.  I remember her dynamic first paper at UKCLE, years back, on lecture clickers, and was really impressed.  Now she’s talking about ‘Embedding Legal and Ethical Principles in the development of secure Autonomous Systems’.  She introduced and outlined work on her node of the UKRI Trustworthy Autonomous Systems (TAS) programme, funded through the UKRI Strategic Priorities Fund and delivered by the Engineering and Physical Sciences Research Council (EPSRC).  She outlined the research methods, collaborative interdisciplinary working practices and stakeholder activities, described the extension and development of the Ethical, Legal and Social Issues (ELSI) in technology methodology, and made predictions for embedding socio-technical considerations into security-focused autonomous systems. I especially liked ‘Backcasting’ – a shift to describing desirable futures and how to achieve them, rather than only describing likely futures.  She focused on co-design – autonomy, beneficence, cooperation, consent, data protection, dignity, diversity, equality.  And ‘beneficence’ – very Francis Hutcheson…  The work was daunting but rewarding: collaborative, interdisciplinary working and writing were crucial.

Next: ‘Is the regulation of biometric systems within the proposed AI Act fit for purpose?’, Chloe Haden, University of Hertfordshire. According to her abstract, the proposed Artificial Intelligence Act (AIA) set out by the European Commission in April 2021 aims to harmonise rules for the development and use of Artificial Intelligence (AI). There have been rising concerns that the proposed AIA is not fit for purpose. She argued that much more clarification is needed on the protection of citizens’ fundamental rights, particularly in systems that process biometric data. The AIA, she claimed, addresses only a minimal set of practices in which the technology can be used. In light of this, it’s clear that much stronger safeguards are needed to ensure effective protection of citizens, whilst also encouraging the ethical growth and use of AI technology. Concern about mass surveillance has risen sharply with the growth of AI, and with several reports of bias and unfairness infiltrating systems, it is crucial to strike a balance between protecting the privacy and security of citizens and avoiding over-regulation that prevents innovation in systems which could bring strong benefits in the future.  A clear, well-organised, well-spoken presentation; and the slides contained a wealth of data on her research.

Now over to online: ‘No-choice architecture: legal implications of persuasion and manipulation in online services and digital products’, presented by Silvia De Conca, Vrije Universiteit Amsterdam. Nudging, manipulation, dark patterns, emotional AI: these are only some of the terms indicating digital products and services capable of hijacking users’ decisional mechanisms to gain data, engagement, and profit for companies.  European and national legislators are beginning to focus their attention on these persuasive techniques. The field is, however, riddled with ambiguous terminology, and the proposed interventions are confusing, with the risk of under- or over-inclusiveness.

Silvia distinguished techniques operating at the level of the user interface from those embedded in the design and functioning of a service or product – in the ‘algorithm’.  She then discussed which features and applications of persuasive techniques challenge the existing legal protection of individuals’ rights and interests. Her presentation tried to do two things: first, to clarify if, and to what extent, a regulatory intervention is necessary; second, to dissect the purposes and rationales of a possible regulatory intervention. She did not offer a definitive answer concerning persuasive design techniques.  Rather, her purpose was to clarify which questions should be asked, to reframe the debate – and interventions – on the issues.  Interesting on the research lines, eg the factors enhancing persuasion she aligns with the concept of kairos – being in the right place at the right time – and signalling community.  Not quite the theological meaning, but very suggestive. She noted how the techniques trigger the user to engage with a device or product, providing a reward that satisfies yet leaves the user wanting more, and eliciting the simplest behaviour in anticipation of a reward.  She put the techniques on a scale: persuasion – nudge – manipulation – deception – coercion.  We know this goes on, we’ve all experienced it; but all the same it’s chilling to see it set out in detail in this fine paper.

Now on to Phoebe Li, ‘Regulating Trustworthy Autonomous Systems (TAS): AI in healthcare’, co-written with Robin Williams, Daria Onitu and Stuart Anderson.  Phoebe introduced the project, funded by the UK Engineering and Physical Sciences Research Council (EPSRC). According to her, AI has the power to transform and scale up mass population diagnosis, for conditions such as diabetic retinopathy and lung cancer. But the deployment of AI systems is fraught with risks and uncertainties. Currently there are still many gaps and uncertainties in the regulatory system in relation to governing and managing the risks arising from data, software, hardware, various actors, and patients as individuals and as a population.  She observed that after Brexit the UK has diverged from EU regulatory approaches. Phoebe and her co-writers reviewed the key issues mapped at a stakeholders’ workshop held in January 2022. They examined a range of legal instruments with a special focus on regulating AI and Software as a Medical Device (SaMD), including the EU Medical Device Regulation 2017, the proposed Artificial Intelligence Act 2021, and the UK Medicines and Medical Devices Act 2021. They contrasted the regulatory models of the EU and the US, with a view to navigating a future regulatory direction for the UK.  Problems: data, anticipating change, modes of regulation, and regulatory burdens.  The problems sounded rather like legal education regulation all over…  Fascinating paper.

Penultimate paper: ‘User vs Machine: The Use of Automated Content Recognition in User-Generated Content Platforms’, from Sevra Guler Guzel, University of Hertfordshire.  She’s dealing with the implementation of Article 17 of the CDSM Directive, its obligations and the problems they cause.  According to her, the CDSM Directive falls short of providing robust safeguards for users’ fundamental rights, especially their freedom of expression and the arts. The effective obligation to incorporate automated content recognition tools, which have historically been found problematic from a fundamental rights perspective, is the primary cause of the concerns.  User-generated content platforms are among the most important intermediaries for user expression.  But copyright enforcement within these platforms is administered by automated, algorithm-dependent decision-making. In practice, these tools assess lawfulness and decide the fate of user uploads.  The danger is that these automated tools become judges of online legality. To prevent this outcome, Sevra recommended involving users as neutral actors in the decision-making process regarding user-generated content uploads.  She also recommended robust safeguards, as confirmed by the Guidance and the AG opinion in the Poland case; clear concepts and definitions in national implementations; equal importance given to Articles 17(4) and 17(7); and resolution of the failure to implement copyright moderation practices sufficient to protect user rights and interests.  A wide-ranging, detailed treatment of the issues involved.
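For fellow non-specialists, here’s my own toy sketch – emphatically not Sevra’s work, and every name and threshold in it is hypothetical – of how an automated content recognition gate might route uploads, with the human-in-the-loop step she recommends rather than letting the matcher judge legality on its own:

```python
# Toy sketch of an automated content recognition (ACR) gate with a
# human-review fallback. Hypothetical: real ACR systems use perceptual
# fingerprints rather than exact hashes, and far richer licensing data.
import hashlib
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "publish", "block", or "human_review"
    reason: str

# Hypothetical rights-holder reference database: fingerprint -> work title.
REFERENCE_DB = {
    hashlib.sha256(b"protected-work-bytes").hexdigest(): "Protected Work",
}

def fingerprint(upload_bytes: bytes) -> str:
    # Stand-in for a perceptual audio/video fingerprint.
    return hashlib.sha256(upload_bytes).hexdigest()

def moderate(upload_bytes: bytes, user_claims_exception: bool) -> Decision:
    match = REFERENCE_DB.get(fingerprint(upload_bytes))
    if match is None:
        return Decision("publish", "no match in reference database")
    if user_claims_exception:
        # Art. 17(7)-style safeguard: a claimed quotation/parody/pastiche
        # exception is a legal judgement, so defer to a human reviewer
        # instead of letting the matcher decide online legality.
        return Decision("human_review", f"matched '{match}', exception claimed")
    return Decision("block", f"matched '{match}', no exception claimed")

if __name__ == "__main__":
    print(moderate(b"protected-work-bytes", user_claims_exception=True))
    print(moderate(b"original-user-content", user_claims_exception=False))
```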

Final paper is from James Griffin (BILETA treasurer), University of Exeter: ‘How an automated 3DP licensing platform works – the use of our watermarking system’.  James focused on the technologies involved and their links to the current law, demonstrating how his teams are progressing in working with blockchain in 3D printing. Through a combination of projects funded by the AHRC, Newton Fund, Ningbo Science and Technology Bureau and the Li Dak Sum Fellowship (how long did it take to put all that together?), James has enabled the development of a licensing platform for 3D-printed content. Early parts of the project focused on the development of watermarking technologies, which were successfully patented. Project partners are now focusing on developing the watermarking further by linking it to blockchain to provide for background licensing.  Last month they ran their first workshop in China. This, he argued, will revolutionise the distribution of 3DP content. For more info, see his profile page, under ‘impact and engagement’.  Never having worked with either 3D or 4D printing I was pretty lost in this, and well-nigh gave up when he mentioned using IBM quantum computers.  Yet this is a future tech session, and to a complete outsider it seemed a remarkable future technology with multiple regulatory issues.
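Quantum computers aside, the basic watermark-to-ledger idea is graspable even for an outsider like me. Here’s my own illustrative sketch – no relation to the Exeter team’s patented system, and everything in it is hypothetical – of embedding an identifier in a model file, recording it on an append-only hash-chained ledger (standing in for a blockchain), and looking it up to resolve licence terms:

```python
# Toy sketch linking a 3D-model watermark to an append-only ledger for
# licensing lookups. Hypothetical throughout: real watermarks are embedded
# in the mesh geometry itself, and a real deployment would use an actual
# blockchain rather than this hash-chained list.
import hashlib, json, time
from typing import Optional

class Ledger:
    """Minimal hash-chained, append-only record store."""
    def __init__(self):
        self.blocks = []

    def append(self, record: dict) -> str:
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps({"prev": prev, "record": record, "ts": time.time()},
                          sort_keys=True)
        block_hash = hashlib.sha256(body.encode()).hexdigest()
        self.blocks.append({"hash": block_hash, "record": record})
        return block_hash

    def find_licence(self, watermark_id: str) -> Optional[dict]:
        for block in self.blocks:
            if block["record"].get("watermark_id") == watermark_id:
                return block["record"]
        return None

def embed_watermark(obj_text: str, watermark_id: str) -> str:
    # Crude stand-in: hide the ID in an OBJ comment line. A real system
    # would perturb the geometry imperceptibly instead.
    return f"# wm:{watermark_id}\n{obj_text}"

def extract_watermark(obj_text: str) -> Optional[str]:
    for line in obj_text.splitlines():
        if line.startswith("# wm:"):
            return line[len("# wm:"):]
    return None

if __name__ == "__main__":
    ledger = Ledger()
    ledger.append({"watermark_id": "wm-0001",
                   "licence": "print up to 5 copies, non-commercial"})
    model = embed_watermark("v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n", "wm-0001")
    wm = extract_watermark(model)
    print(wm, "->", ledger.find_licence(wm))
```

The point of the hash chain is only that each block commits to its predecessor, so past licence records can’t be quietly rewritten – which, as far as I could follow, is the property that makes ‘background licensing’ of distributed 3DP files plausible.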

I had to miss the final wrap-up plenary and end of conference.  Tomorrow, on my two-day schlep back to Skye, I’ll try to put some final thoughts together.  Not in a fit state to do that just now.