Intelligent systems comprise one or more AI technologies embedded within a larger systems architecture. They are being deployed in a growing range of scenarios, including autonomous vehicles, smart home appliances, retail, healthcare and manufacturing. There are many situations in which an intelligent system (or its developers) might need to be held to account, for example after a system failure, or during auditing and validation of decision making. It may be difficult or even impossible to know how the system is making decisions, what went wrong in the case of a failure, or who should be held responsible. How can we benefit from the superhuman capacity and efficiency that such systems offer without giving up our desire for accountability, transparency and responsibility? How can we avoid a stalemate choice between forgoing the benefits of automated systems altogether or accepting a degree of arbitrariness that would be unthinkable in society's usual human relationships?
The RAInS project is run jointly across the Universities of Aberdeen, Cambridge and Oxford, and aims to realise processes by which these systems can be made accountable, by developing an accountability fabric for use by a variety of stakeholders. The project will use computational models of provenance as a substrate for enabling trust; such a mechanism facilitates transparency and accountability by recording the processes, entities and agents associated with a system and its behaviours, supporting verification and compliance monitoring.
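The idea of recording processes, entities and agents, in the spirit of the W3C PROV data model, can be sketched in a few lines of Python. This is a minimal illustration only, not the project's actual implementation; the class, method and identifier names (`ProvenanceLog`, `who_generated`, `decision:loan-123`, etc.) are all illustrative assumptions.

```python
# A minimal sketch of provenance recording in the spirit of the W3C PROV
# data model: entities (things), activities (processes) and agents
# (people or software responsible). All names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class ProvenanceLog:
    # (subject, relation, object) triples, e.g.
    # ("decision:loan-123", "wasGeneratedBy", "activity:score-applicant")
    relations: list = field(default_factory=list)

    def record(self, entity: str, activity: str, agent: str) -> None:
        """Record that `entity` was generated by `activity`,
        and that `activity` was associated with `agent`."""
        self.relations.append((entity, "wasGeneratedBy", activity))
        self.relations.append((activity, "wasAssociatedWith", agent))

    def who_generated(self, entity: str) -> list:
        """Trace back from an entity to the responsible agents,
        supporting audit questions such as 'who produced this decision?'."""
        activities = [o for (s, r, o) in self.relations
                      if s == entity and r == "wasGeneratedBy"]
        return [o for (s, r, o) in self.relations
                if s in activities and r == "wasAssociatedWith"]


log = ProvenanceLog()
log.record("decision:loan-123", "activity:score-applicant", "agent:model-v2")
print(log.who_generated("decision:loan-123"))  # → ['agent:model-v2']
```

A real accountability fabric would persist such records tamper-evidently and use richer vocabularies, but even this sketch shows how recorded relations let an auditor trace a system behaviour back to the responsible process and agent.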
For further information
Visit the RAInS project website.