Background

Designing for Trust: Autonomous Animal-Centric Robotic & AI Systems

Alan Chamberlain, Steve Benford, Joel Fischer, Pepita Barnard, Chris Greenhalgh - Computer Science, University of Nottingham
Ju Row Farr, Nick Tandavanitj, Matt Adams - Blast Theory

Introduction 

From cat feeders and cat flaps to robot toys, humans are deploying increasingly autonomous systems to look after their pets. In parallel, industry is developing the next generation of autonomous systems to look after humans in the home – most notably robot arms that might assist with all manner of domestic tasks. How might the animals and humans in these spaces engage with these systems and with each other? We reflect upon the role that ‘trust’ plays in autonomous animal-centric robotic and AI systems, and the ways in which we as researchers can further understand how to design, develop, and evaluate such systems by taking a Responsible Research and Innovation (RRI) approach.

 

Notions of ‘trust’ are non-static and can be highly context-specific; knowing which models of trust-based interaction to apply in a given context is not a simple task [10]. Deploying autonomous technologies “in the wild” [2][3] that are effective and useful beyond the confines of the research lab is difficult, owing to the specifics and changing nature of the setting and context, and yet it is real-world settings that can truly inform the design of technologies with which animals and people engage. The value of understandings informed by real-world practices lies in their ability to direct the overall design of a system, allowing people to develop a deeper understanding of the practices of both animals and humans in their natural settings, leading to a more responsible and grounded approach to design that respects animals and their interactions in a day-to-day context [11].

Understanding, designing for, and negotiating ‘trust’ is complex, particularly in contexts where animals, humans, and intelligent systems (including robotics, AI, and IoT) come together in a social setting in which bonds are created, friendships develop, and mutual care plays a part in the relationship between people and animals. Systems need to be reliable, trusted, and adaptable if they are to work in ‘critical’ settings such as the provision of food, and with this in mind we need to be aware of the safety-critical dimensions of the context and their relationship to the way one might develop software systems for such a setting [9]. It is key that people understand the multi-faceted, dependent nature of any system that weaves together online services, autonomous systems, robotics, and IoT in the real world, particularly when these are used in such a way as to enhance and support the mutual bond that exists between animals and humans in a domestic setting.
In this setting we are responsible for looking after and caring for animals; passing part of this responsibility to an autonomous system means that we become reliant upon that system doing its job, whether that is opening a cat flap, feeding, or making intelligent health-based interventions – we are entrusting this role to a technical system. Baier [1] writes, “the custody of these things that matter to me must often be transferred to others, presumably to others I trust. Without trust, what matters…would be unsafe”. In the context of this philosophical text, this means that when we transfer trust – when we entrust, giving away control and responsibility to others – we reason about it (in our case we are interested in this transfer of trust to intelligent robotic systems); without this sort of “trust” and reasoning about potential outcomes, there is a risk (things are “unsafe”) to the things that we care about, the things that matter. Negotiating this potential ‘risk’ is difficult once one starts to think about the subtle ways in which humans interact with companion animals, the personal nature of this interaction (we know our companion animals), and the tacit ways in which it occurs. How can we design for ‘trust’ in this context, when even at a high level it appears to be so situationally dependent, and how do we inform a system about the idiosyncrasies of these interactions? This notion leads us on to think about “what matters”, the value of “what matters”, and how we invest in decisions and reason about entrusting human responsibility to technical systems in the context of ACI. In order to address this, we need to engage with a range of issues relating to safety, prioritisation, provenance, and importance as we start to navigate the zoo-socio-technical landscape when designing trustworthy autonomous systems.
In such cases we need to think about the reliability and security of a system, how acceptance occurs, and the visibility, intelligibility, and explainability of a system, in respect of both ACI and HCI.

 

Trust and belief are foundational, forming the basis for observable mutual negotiations and interactions between humans [6] and between humans and animals [4]. In the context of this research, we will start to unpack the ways in which humans and animals engage with autonomous technologies, and the role of trust in these settings. The addition of autonomous systems adds another dimension and underlying complexity to these interactions, as illustrated by ongoing research explorations and approaches to design that examine trust, AI, and autonomy in a ‘mundane’ setting [7]. But how might we approach this from an ACI perspective, and which areas of trust might we focus upon to inform researchers, policy/lawmakers, and designers?

 
We bring together a range of researchers, artists, and designers to explore the role of ‘Trust’ and autonomy in such systems. We explore how we might deal with intelligent systems in a more animal-centric way, where the social setting and our relationship with animals can help us to take RRI into context, appreciate issues pertaining to agency, and develop participatory approaches that engage both animals and humans in design – and how we might do this in a responsible, beneficial way. The notion of participation in design has been discussed previously in the literature [5][8], and we would like to expand upon this to further develop responsible approaches to design, particularly in the context of ‘Trust’, ACI, and autonomous systems. We are at a point where a range of autonomous technologies is being implemented: robotic systems in the home, the robot toys and feeders that we refer to at the start of this piece, and autonomous speech-based systems that can intelligently change the voice of the system (perhaps to mimic the sound of a human or animal). It is these technologies that are leading to the development of a range of technology-based enrichment activities and services (as previously explored in the ACI research domain, see Zoo Jam [12]), which could positively impact a whole range of ACI contexts. But how do we get people to engage in these debates and the issues that surround them, how can we effectively design for such complex contexts in the real world, and can the Arts play a role in these debates? This workshop aims to look at some of these interdisciplinary issues and asks people to think about how we approach a fast-approaching world in which autonomous, intelligent systems are becoming the norm – where such systems may already be embedded into our lives in ways that we don’t see.

 

The workshop will be hosted by the Cat Royale project, an ongoing artistic exploration of cat-human-robot interactions being led by the artists Blast Theory as part of the UKRI Trustworthy Autonomous Systems Hub. We will share our story so far of engaging with questions of responsibility, trust and autonomy while inviting other participants to share theirs with a view to establishing an agenda for future research.


ACKNOWLEDGMENTS

This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/V00784X/1], UKRI Trustworthy Autonomous Systems Hub.


Workshop Chairs - University of Nottingham
General Chair - Alan Chamberlain 
Technical Chair - Dominic Price

Logistics Chair - Pepita Barnard
Communications Chair - Eike Schneiders
 


REFERENCES

  1. Annette Baier. 1986. Trust and Antitrust. Ethics, 96(2), 231–260. http://www.jstor.org/stable/2381376

  2. Alan Chamberlain, Andy Crabtree, Tom Rodden, Matt Jones, and Yvonne Rogers. 2012. Research in the wild: understanding 'in the wild' approaches to design and development. In Proceedings of the Designing Interactive Systems Conference (DIS '12). Association for Computing Machinery, New York, NY, USA, 795–796. https://doi.org/10.1145/2317956.2318078

  3. Andy Crabtree, Alan Chamberlain, Mark Davies, Kevin Glover, Stuart Reeves, Tom Rodden, Peter Tolmie, and Matt Jones. 2013. Doing innovation in the wild. In Proceedings of the Biannual Conference of the Italian Chapter of SIGCHI (CHItaly '13). Association for Computing Machinery, New York, NY, USA, Article 25, 1–9. https://doi.org/10.1145/2499149.2499150

  4. David Goode. 2006. Playing with My Dog Katie: An Ethno-Methodological Study of Canine/Human Interaction. Purdue University Press.

  5. Clara Mancini. 2013. Animal-computer interaction (ACI): changing perspective on HCI, participation and sustainability. In CHI '13 Extended Abstracts on Human Factors in Computing Systems (CHI EA '13). Association for Computing Machinery, New York, NY, USA, 2227–2236. https://doi.org/10.1145/2468356.2468744

  6. Christian Meyer. 2019. Ethnomethodology's Culture. Human Studies 42(2), 281–303. https://doi.org/10.1007/s10746-019-09515-5

  7. Matthew Pilling, Paul Coulton, Tom Lodge, Andy Crabtree, and Alan Chamberlain. 2022. Experiencing Mundane AI Futures. In Design Research Society Conference 2022 (DRS2022), 25 June - 3 July, Bilbao, Spain. https://doi.org/10.21606/drs.2022.283

  8. Charlotte Robinson, Ilyena Hirskyj-Douglas, and Patricia Pons. 2018. Designing for Animals: Defining Participation. International Conference on Animal-Computer Interaction, Georgia Tech, Atlanta, United States. https://participationaci.wordpress.com, webpage accessed July 2022.

  9. Ian Sommerville. 2011. Software Engineering (9th ed.). Addison Wesley.

  10. Lauren Thornton, Bran Knowles, and Gordon Blair. 2022. The Alchemy of Trust: The Creative Act of Designing Trustworthy Socio-Technical Systems. In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA, 1387–1398. https://doi.org/10.1145/3531146.3533196

  11. Lauren E. Van Patter and Charlotte Blattner. 2020. Advancing Ethical Principles for Non-Invasive, Respectful Research with Nonhuman Animal Participants. Society & Animals 28(2), 171–190. https://doi.org/10.1163/15685306-00001810

  12. Zoo Jam. http://www.zoojam.org, webpage accessed July 2022.
