Toby is a DPhil student at the Centre for Socio-Legal Studies. He studies how the scientific community in artificial intelligence deals with risks from the technology, focusing on changes to norms over what research is published.
He has a BA in Law from the University of Cambridge and an MSc in Evidence-Based Social Intervention and Policy Evaluation from the University of Oxford. Prior to coming to Oxford, he worked as a judicial assistant in the Court of Appeal of England and Wales and as a research volunteer at UCL's Constitution Unit, a political science think tank.
DOI: https://doi.org/10.1145/3375627.3375815

There is growing concern over the potential misuse of artificial intelligence (AI) research. Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse. This paper addresses the balance between these two effects. Our theoretical framework elucidates the factors governing whether the published research will be more useful for attackers or defenders, such as the possibility for adequate defensive measures, or the independent discovery of the knowledge outside of the scientific community. The balance will vary across scientific fields. However, we show that the existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favours defence, this cannot be assumed for AI research. The AI research community should consider concepts and policies from a broad set of adjacent fields, and ultimately needs to craft policy well-suited to its particular challenges.