Last week Chatham House published a paper by Kate Jones discussing how a human rights framework should guide regulatory and other responses to online disinformation and the distortion of political debate.

'Online Disinformation and Political Discourse: Applying a Human Rights Framework' outlines the ways in which digital technology is increasingly being used to further political aims in a manner that distorts our democratic processes. Techniques include creating disinformation and divisive content; exploiting digital platforms' algorithms and using bots, cyborgs and fake accounts to distribute that content; and micro-targeting on the basis of collated personal data and sophisticated psychological profiling.

Regulation has been slow to keep pace with the development and use of these techniques. International human rights law, designed to protect individuals from abuse of power by authority, provides a framework that should underpin regulatory and other responses. Controls on the online environment should balance the interests at stake by reference to human rights law.

This includes tackling online threats and abuse directed at those who stand for election or engage in public debate of any kind. Human rights law protects the right to participate in public affairs and to vote; states and digital platforms should therefore ensure an environment in which everyone can take part in online debate without undue intimidation.

160 people attended the launch of the paper at Chatham House, and the research was referenced in a Times article on the regulation of digital campaigning and a Forbes article on the negative impacts of social media on democracy.