The recent evolution of corporate whistleblowing has demonstrated the capacity of effective internal whistleblowing systems to support regulatory aims. In light of this, the potential impact of fast-developing ‘RegTech’ applications on corporate whistleblowing activity has significant regulatory implications. While first-generation RegTech applications such as improved data analytics already have the capability to help corporations implement more efficient internal whistleblowing systems, the rise of second-generation AI-powered RegTech technologies is likely to further disrupt, and potentially transform, the practice of whistleblowing in corporations. As AI advances, internal corporate whistleblowers may be supplemented, or even replaced, by ‘whistlebots’ with the ability to report autonomously, with dramatic implications for the role of whistleblowing as a corporate regulatory device. My recent article considers the implications of RegTech for whistleblowing and asks what the future of automated whistleblowing might be.

It is now widely accepted that employees (and agents and contractors with privileged inside information) have a special capacity to correct the information asymmetries that prevent internal control systems and external regulators from uncovering wrongdoing within organisations, and whistleblower protections have increased accordingly. Given the importance of whistleblowing as a regulatory device, the potential impact of technology on the operation of whistleblowing systems in corporations offers an interesting area for investigation. RegTech has demonstrated its ability to improve corporate regulation through intelligent use of technology. A major initial RegTech contribution appears to have been its capacity to manage large amounts of data more efficiently to enhance regulatory compliance outcomes, and the advantages those system improvements could generate in relation to whistleblowing are readily apparent. It seems likely that we will see the development of combined human/AI whistleblowing systems within corporations, enabling data analysis and algorithmic approaches to supplement human judgment and increase levels of effective disclosure.

But there is a further, particularly interesting possibility to which technology could ultimately give rise. Could AI create autonomous whistleblowers, or ‘whistlebots’? That is, will we see much more complete and frequent disclosures by machines, which unlike people (we assume) won’t be held back by the most significant constraint on whistleblowing activity: the negative repercussions of blowing the whistle? If a bot is able to trawl data within a corporation, analyse patterns, discern potential problems and uncover wrongdoing, will it face any real disincentive to disclosing that wrongdoing, of the kind a human worker might experience? We know that whistleblowers frequently suffer significantly for their decision to reveal wrongdoing in organisations. But shame, exclusion and lost job opportunities are human impacts. The calculus undertaken by a bot in deciding whether or not to make a disclosure would presumably rest solely on objective factors (such as the quality of the evidence, the potential risks of non-disclosure and the potential cost of an unnecessary disclosure), without any countervailing risk-weighting for personal repercussions. Free of anxiety about the emotional consequences, will a whistlebot be a more effective teller of truth than the human whistleblowers on whom modern corporate regulatory systems increasingly rely?
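The contrast in that calculus can be made concrete. The following is a purely illustrative sketch, not a description of any real system: the `Finding` fields, the scoring rule and all weights are hypothetical. It shows the structural difference the article points to, namely that a human's decision carries a personal-repercussion penalty which a whistlebot's omits.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    evidence_strength: float  # 0..1, confidence that wrongdoing occurred
    harm_if_silent: float     # expected cost of non-disclosure
    cost_if_wrong: float      # expected cost of an unnecessary disclosure

def bot_should_disclose(f: Finding) -> bool:
    """Objective weighing only: disclose when the expected harm of
    staying silent outweighs the expected cost of a false alarm."""
    expected_benefit = f.evidence_strength * f.harm_if_silent
    expected_cost = (1 - f.evidence_strength) * f.cost_if_wrong
    return expected_benefit > expected_cost

def human_should_disclose(f: Finding, personal_risk: float) -> bool:
    """The same calculus, plus a penalty for personal repercussions
    (retaliation, exclusion, lost job opportunities)."""
    expected_benefit = f.evidence_strength * f.harm_if_silent
    expected_cost = (1 - f.evidence_strength) * f.cost_if_wrong + personal_risk
    return expected_benefit > expected_cost

# A borderline case: the bot discloses, while a human facing
# retaliation risk stays silent.
f = Finding(evidence_strength=0.7, harm_if_silent=100.0, cost_if_wrong=120.0)
print(bot_should_disclose(f))                        # True  (70.0 > 36.0)
print(human_should_disclose(f, personal_risk=50.0))  # False (70.0 > 86.0 fails)
```

On these illustrative numbers, the identical evidence produces opposite decisions, which is exactly the asymmetry that makes the whistlebot question interesting.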

It’s worth noting here that we need to be careful not to imbue AI with unrealistic advantages while ignoring AI’s own problematic components in terms of fairness, accountability and transparency, including inherent bias and the so-called ‘black box’ problem of non-transparent algorithms. A risk associated with any increased automation of whistleblowing activities is the potential for larger data sets, complex algorithms and artificially intelligent whistlebots to create worlds of such technical complexity that disclosures are buried, rather than disseminated to relevant decision-making points within an organisation. Another acknowledged problem with AI is automation bias: the tendency of those relying on it to defer to an AI-generated response even when a human-derived decision would be more appropriate.

Notwithstanding these limitations, is it possible that we might now move much more quickly towards the holy grail of internal corporate whistleblowing systems—complete internal corporate transparency? That is, if an initial hybrid period of blended AI/human whistleblowing systems is replaced in time by sufficiently sophisticated and complex AI whistlebots, will all wrongdoing, poor practice and inefficiency be exposed automatically, with no role left for human disclosures? Beyond that, will the need for whistleblowing at all be removed by the advent of entirely self-sufficient, transparent corporate entities controlled by AI? If our current problems of lack of internal transparency within companies and the tendency (within some corporations at least) to hide wrongdoing are not inevitable artefacts of the corporate form, an AI-enabled future might see those issues fall away.

While the level of complexity involved in the management of whistleblowing activities within corporations is likely to ensure humans are involved for some time yet, the possibility of a technologically transformed future whistleblowing environment cannot be ignored. It may be that given whistleblowing’s particular vulnerability to the vicissitudes of human existence, the impact of technology is more significant for this form of regulation than many others.

Vivienne Brand is an Associate Professor at Flinders University.