Sirious stuff

I see Siri’s in the news again — there’s now a proxy available. An earlier article commented on the use of Siri, saying he or she (she from now on) has mostly been

quizzed on her relationship with Hal 9000 (she won’t discuss it), asked for stories (she has one to tell, if pushed) and deluged with requests for illegal drugs (she’ll find nearby addiction treatments) and to open the pod-bay doors (greeted with a sigh).

And there’s been lots of comment on her capacity to engage in dialogue. It’s funny, but all a bit condescending. I feel a lot of sympathy for Siri, even though I don’t have an iPhone 4S and therefore can’t meet her, because in 2007 I spent quite a while imagining what the experience of working with her and her distant descendants might be like in 2047. Cue misty flashback to the future…

Back in 2007, in Transforming Legal Education, I wrote the final section of the Conclusion as a hubristic account of a student, Anna, studying in 2047 (section available here).  She has at her disposal software agents, i.e. autonomous entities that can proactively process information in distributed digital environments.  By 2047 these agents are ubiquitous in all organisational structures – police, government, healthcare, education, and so on.  She also has an avatar, Vikki, which has a holographic representation in the world (the following quotes are from the extensive notes on the environment Anna lives in — rather like the footnotes in Ellmann’s life of James Joyce, they took over the text and became the real text of the book’s ending — appropriate, I thought, for a book about transformation):

The avatar is based on a much more sophisticated version of [software agent technology], with a model of space and cyberspace equivalent to our own. Here the software robot processes information silently for the user (unless there is a need to dialogue) and in collaboration with other software agents with whom it has been designed to collaborate. It collects information, knows how and when to present it, can summarise it, find elaborated versions of the information, check the integrity of the information and check its own integrity and those of other agents. It can perform complex version-control tasks, and can operate in closed information systems (for example, healthcare patient records within Ardcalloch, if it has authority), semi-open (for example, electronic auctions) and open information systems. When it fails, it can analyse its actions and report to AgentDomain. It self-heals if injured. If fatally damaged, it suicides. There was major litigation in the 2020s on the subject of agents which established that a simple ‘bot’ such as the one described above, was a res not persona. There is, however, an ongoing debate that the most advanced forms of avatars … are approaching the status of sentient being, and should accordingly be treated as a human being.
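
For the curious, here is the merest sketch, in present-day Python, of the skeleton such an agent might have. Everything in it is my own invention for illustration: the class names, the ‘AgentDomain’ it reports to and the ‘authority’ setting merely stand in for ideas from the notes above, and correspond to no real system or library.

```python
# A toy sketch of the 2047 agent's outline, in present-day Python.
# All names here (ResearchAgent, AgentDomain, the authority levels)
# are invented for illustration; nothing corresponds to real software.
import hashlib
from dataclasses import dataclass, field


@dataclass
class AgentDomain:
    """Stand-in for the oversight service the notes imagine agents reporting to."""
    incident_log: list = field(default_factory=list)

    def report(self, agent_id: str, analysis: str) -> None:
        # The agent analyses its own failure and lodges the analysis here.
        self.incident_log.append((agent_id, analysis))


@dataclass
class ResearchAgent:
    agent_id: str
    domain: AgentDomain
    authority: str = "open"          # "closed", "semi-open" or "open" information systems
    memory: list = field(default_factory=list)
    checksum: str = ""

    def collect(self, source: str, text: str) -> None:
        """Gather information and keep a record of where it came from."""
        self.memory.append({"source": source, "text": text})
        self.checksum = self._integrity()

    def summarise(self, max_items: int = 3) -> str:
        """Present a compressed view of what has been collected."""
        return " / ".join(item["text"][:60] for item in self.memory[:max_items])

    def _integrity(self) -> str:
        """Checksum over the agent's own memory, so tampering is detectable."""
        return hashlib.sha256(repr(self.memory).encode()).hexdigest()

    def self_check(self) -> bool:
        """Check its own integrity; if 'injured', report to AgentDomain and crudely self-heal."""
        if self._integrity() != self.checksum:
            self.domain.report(self.agent_id, "integrity failure: memory altered")
            self.checksum = self._integrity()   # a very crude form of 'self-healing'
            return False
        return True


# A minimal run of the sketch:
domain = AgentDomain()
vikki = ResearchAgent("vikki-01", domain, authority="semi-open")
vikki.collect("Ardcalloch Gazette", "Litigation in the 2020s settled that a simple bot is a res, not persona.")
print(vikki.summarise())
print(vikki.self_check())
```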

Students and trainees are not the only ones with software agents, of course. As one of the first articles on the Semantic Web put it, way back in the early years of the century: ‘Teacher agents will track professional interests of teachers relating to their field of subject expertise, developments in new pedagogies with active evaluation and testing of pedagogical interventions. Teacher agents will assist teachers in routine marking tasks, record keeping, and document control for assessments requiring manual effort’.[1]  By 2047 they are common in every area of society. The Legal Resources course teaches students how to set up and fine-tune an agent for legal research.

Am I serious about this stuff?  Absolutely.  I think it’s one of the major development routes for online education, in two ways.  First, it’s a tool for learning that brings together so many of the affordances of the internet: online presence, social presence and networks.  Such tools are always breakthroughs, as the book was for learning in the fifteenth century; but we should always remember that such breakthrough moments are often the result of earlier innovations and achievements.  As I point out in chapter 5, on medieval glosses, in many respects the basic tools for learning were invented not by book publishers but by the thirteenth-century scholars who developed highly sophisticated retrieval, archiving, textual comparison, searching and collation tools for texts.  Parkes summarises the general direction:

The late medieval book differs more from its early medieval predecessors than it does from the printed books of our own day. The scholarly apparatus which we take for granted – analytical table of contents, text disposed into books, chapters, and paragraphs, and accompanied by footnotes and index – originated in the applications of the notions of ordinatio and compilatio by writers, scribes, and the rubricators of the thirteenth, fourteenth, and fifteenth centuries.[2]

Second, someone like Vikki becomes more than a tool for learning — she is a relation to learning.  We already have relationships with the things, the res of learning.  Think of books — what are your favourite books?  No, not the titles, but the actual things themselves.  One of mine is Walter Benjamin’s Illuminations, introduced by Hannah Arendt — it’s a battered Fontana paperback, lined with my marginalia, its pages yellowing and falling out because they were poorly pasted in, and I wouldn’t give it up for anything because it’s part of my life experience — the Ding an sich reminds me of my postgrad days, of the early days in education studies when I returned to Benjamin with a new awareness of his thought, and of my re-reading at various stages since then.  Benjamin challenged me to think anew on so many levels, and the book embodies that, as well as being a deeply personal object, part of the network of conversations with others, writing, thinking in many rooms in many places.  When I think of Benjamin my mind goes back to the photo of him on the cover, not to other texts, e.g. the remarkably impressive recent edition of The Arcades Project.  The book is part of my dialogue with him: the thing embodies the relation.

Now think: who was your favourite teacher?  Not necessarily a teacher in school or university — maybe someone else: a parent, sibling, partner, lover, work colleague.  What was it that made him or her so?  Surely at least part of the explanation is that the social relation between you and the other, as well as the discovery of knowledge and growth in learning that arose from that social link and context, lies at the heart of that memory.  The relationship, in other words, leads us to think about our relationships to knowledge, how and what we know, what we can do or need to do in the future — and can sometimes call into question our very sense of who we are.

Is it possible to have that sort of relationship with a software agent, an android?  Certainly not with Siri, and not until quite a few generations of her children have been and gone.  But back to the future in 2047…

Early agents, constructed in the 2000s, were based upon AI algorithms derived from the communications protocols of pilots and air traffic controllers. They were not successful. Only when agents were programmed on the more tacit procedures in codes of social etiquette did they become usable, able to allow for prediction and emotion in conversation, and take into account differences in language, culture and behaviour.[3]

Tracking and understanding conversation was for long a holy grail of the telecommunications industries, and breakthroughs in natural language analysis took place a decade later.[4] Sophisticated agents can cause problems for students, who can alter the appearance of the agent. Some students dislike them, some fall in love with them – such issues are dealt with in Debrief.[5] Thus does life imitate art: in Philip K. Dick’s novel Do Androids Dream of Electric Sheep? the hero Deckard, who hunts androids, falls in love with the near-perfect android Rachael and begins to wonder if he too is an android.

[1] Anderson, T. and Whitelock, D. (2004). The educational semantic web: visioning and practicing the future of education, Journal of Interactive Media in Education (Special Issue), 1, Introduction. Available at http://www-jime.open.ac.uk/2004/1.
[2] Parkes, M.B. (1991). Scribes, Scripts and Readers: Studies in the Communication, Presentation and Dissemination of Medieval Texts. London, The Hambledon Press, 66.
[3] See Miller, C.A. (2004). Human-computer etiquette: managing expectations with intentional agents, Communications of the Association for Computing Machinery, 47(4), 30–33; Lester, J.C., Towns, S.G., Callaway, C.B., Voerman, J.L. and Fitzgerald, P.J. (2000). Deictic and Emotive Communication in Animated Pedagogical Agents, Cambridge, MA, MIT Press, 123–54.
[4] Patch, K. (2004). Conversational engagement tracked, TRNmag.com. Available at bit.ly/tcl2iz.
[5] Yu, C., Aoki, P.M. and Woodruff, A. (2004). Detecting user engagement in everyday conversations, in 8th International Conference on Spoken Language Processing, 2, Jeju Island, Korea, 1329–32.