- Veale M., Binns R., & Edwards L. (2018). Algorithms That Remember: Model Inversion Attacks and Data Protection Law. Philosophical Transactions of the Royal Society A.
Recent ‘model inversion’ attacks from the information security literature indicate that machine learning models might be personal data, as they can leak the data used to train them. We analyse these attacks and discuss their legal implications.
- Veale M., Van Kleek M., & Binns R. (2018). Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI’18).
We interviewed 27 public-sector machine learning practitioners about how they cope with challenges of fairness and accountability. Their problems often differ from those considered in FAT/ML research so far, and include internal gaming, changing data distributions, inter-departmental communication, how to augment model outputs, and how to transmit hard-won social practices.
- Binns R., Van Kleek M., Veale M., Lyngs U., Zhao J., & Shadbolt N. (2018). ‘It’s Reducing a Human Being to a Percentage’: Perceptions of Justice in Algorithmic Decisions. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI’18).
We presented participants, in the lab and online, with adverse algorithmic decisions and different explanations of them. Participants strongly disliked case-based explanations, in which they were compared to a similar individual, even though these are arguably highly faithful to the way machine learning systems work.
- Veale M. (2017). Data Management and Use: Case Studies of Technologies and Governance. London: The Royal Society; the British Academy. [mirror]
I authored the case studies for the Royal Society and British Academy report that led to the UK Government’s new Centre for Data Ethics and Innovation. I also acted as a drafting author on the main report.