Trustworthy AI and Corporate Governance

Author(s)

Eleanore Hickman
Lecturer, University of Bristol Law School
Martin Petrin
Dancap Private Equity Chair in Corporate Governance, Western University

There is much we do not know about the future of AI, including the ways it will impact corporations and their governance. Nevertheless, it is reasonable to expect that, as in many other areas of society, there will be an impact and that it is likely to be significant. Despite limited knowledge about the shape of that change, safeguarding future safety and fairness requires preparation. The EU's 'Ethics Guidelines for Trustworthy Artificial Intelligence' embrace the challenge of preparing for the unknown, setting out a number of principles upon which the use of AI should be based. In our recent article, we consider how the practical application of these principles might affect corporations and their management. In doing so, we highlight a number of questions and issues that need further thought, a few of which we discuss here. The article concludes that, although the principles provide a useful starting point for establishing trustworthy AI in business, more specificity is needed regarding how they will harmonise with company law rules and governance practices.

The Guidelines' concerns about the use of AI derive from four objectives: respect for human autonomy, prevention of harm, fairness and explicability. The overall aim is for AI to be lawful, ethical and robust. From this core, the Guidelines posit seven key principles that AI systems should meet in order to be deemed trustworthy, which fall under the following headings:

  • human agency and oversight;
  • technical robustness and safety;
  • privacy and data governance;
  • transparency;
  • diversity, non-discrimination and fairness;
  • societal and environmental well-being; and
  • accountability.

In applying the Guidelines' principles to the corporate governance context, we take two modes of AI impact into account. The first is direct: AI used on the board itself. This will fall somewhere on a scale of autonomy where, at one end, AI acts as an assistant to the board and, at the other, the board is fully autonomous. The second mode is indirect: AI used by the corporation and overseen by the board.

It is easy to see why human oversight of AI is one of the Guidelines' key ethical principles. Without human input into decision making, dangers could easily arise. But achieving effective oversight is challenging. By its nature, AI is highly technical, so it is unlikely that boards in their current form will be capable of adequate oversight. A considerable rethink of board-level skills and/or a raft of new training will be necessary. As for the use of AI on the board itself, a fully autonomous AI board is clearly incompatible with the Guidelines, but the point on the scale of autonomy at which AI involvement becomes compatible is unclear. And the more AI autonomy is scaled back, the smaller the likely efficiency gains.

The principle of diversity, non-discrimination and fairness presents several concerns. These include AI's potential to derail progress on gender and ethnic diversity on boards. The business case for board diversity is commonly justified by the value of diverse perspectives. Yet if AI is to create any enhancements or efficiencies in management, the datasets it processes should themselves reflect a variety of perspectives, and to the extent that AI supplies this variety, it obviates the business case for diversity among the board's human constituents. The board may then need to revert to a social justice justification to promote diversity. Although this is arguably a more sustainable and fairer justification, to date it has been largely shunned by business and policymakers. The risk is that, without the business case, progress on board diversity will stall or reverse.

Also potentially problematic from a diversity perspective is AI's use of data. The information fed into an AI system will govern the outcomes it produces, and it is well known that historic data may reflect certain biases. The ways in which biased data produce biased outcomes are often surprising, and as the use of AI increases, boards will need to find ways to counteract this effect in order to remain within the Guidelines.

From a more traditional company law perspective, fundamental questions arise regarding the interrelation between the Guidelines and corporate purpose. The Guidelines' requirement to take into account institutions, democracy and society at large does not fit easily with the need for AI to have clear and specific goals. Nor does this requirement align easily with the (albeit qualified) duty of managers to act in the best interests of shareholders in shareholder-centric jurisdictions such as the UK. On the plus side, advanced AI usage is potentially highly beneficial, provided it is able to balance conflicting interests accurately. However, AI cannot conceivably be aware of every matter of concern to humans, and what it is not aware of, it cannot account for. Some values or concerns may therefore be sacrificed in favour of expected benefits in other areas, and it is not clear that following the Guidelines will ensure these trade-offs are made safely or fairly.

At a more fundamental level, to the extent that AI makes managerial decisions in the future, there is a difference between the substantive rationality (decision making with the use of discretion) employed by humans and the formal rationality (the logical solution produced by processing a dataset) employed by AI. Whether gains in accuracy are worth the lost ability to consider 'what ought to be' is a matter for debate. Hand in hand with this comes the social danger of losing knowledge to machines because, considered individually, there is limited incentive for anyone to spend years learning what a machine can learn in minutes. This takes us full circle to the first principle of human oversight because, with no humans who understand the decision processes of AI, effective oversight becomes impossible.

The regulation of corporate governance seeks to establish checks and balances over powerful corporations. AI can be considered a new power, one that is emerging fast as countries compete to be at the forefront of its development, yet few checks and balances are presently in place. Corporations are in a position of responsibility in respect of AI usage, and the interplay between corporate governance regulation and the Guidelines, or future versions of them, is important to consider. The issues mentioned above, and a number of others considered in our article, represent a bid to start a conversation today that might pre-empt some AI-based corporate governance concerns tomorrow.

Dr Eleanore Hickman is a research associate at the 3CL Centre for Corporate and Commercial Law, University of Cambridge Faculty of Law.

Martin Petrin is a Professor at University College London and Western University.
