Recently, much interest has been expressed in introducing artificial intelligence (AI) to the corporate boardroom, including in the form of robo-directors. The debate has taken on new intensity, owing in part to the exponential growth in the volume of data that directors are expected to digest for decision-making purposes. The dilemma of information overload was acknowledged in the Australian case of ASIC v Healey (2011) 83 ACSR 484 [229]. There, the Court took a stringent approach and held that the complexity and volume of data cannot excuse directors’ failure to read and comprehend financial statements.

AI can be said to exemplify a new breed of risk management issue that must be addressed on board agendas once a decision has been made to incorporate AI into corporate practice. This article tentatively suggests that directors may be able to use AI without necessarily exposing themselves to personal liability.

A comparative study of Australian and UK corporate laws demonstrates that neither the Corporations Act 2001 (Cth) nor the Companies Act 2006 appropriately caters for the use of AI by directors. However, this article posits that, although neither statute expressly provides for the use of AI by directors, AI can still be leveraged by directors to a company’s benefit through the company’s constitution.

Recent literature has discussed the possibility of appointing an AI tool as a director by reference to real-life examples of robo-directors such as Alicia T, which has been appointed to vote on management decisions. Nevertheless, the view can be taken that such a scenario is unlikely to occur in the Australian or the English context, at least in the foreseeable future. The corporate laws in both jurisdictions appear outpaced by the advent of transformational technologies such as AI. Indeed, under section 9 of the Corporations Act, a director must be a natural person. A similar stance has been taken in the UK by the Companies Act 2006, which, under section 250, mandates that a director be a natural person. Consequently, while some tech proponents have advocated for the substitution of directors by AI, we believe that the setting is one of cooperation between machines and directors rather than a replacement of the latter.

The decision-making process is not always straightforward for directors, who may not be knowledgeable in every area of decision-making and who, as a consequence, must have recourse to the advice and information provided by others, including AI. The real issue then becomes whether directors might be exposed to liability for breach of their duties of care and diligence, and of independent judgment, by reason of reliance on, or misuse of, AI.

Whether the use of AI will exacerbate liability issues ultimately depends on the manner in which courts interpret the wording of statutory provisions relating to directors’ duties. Meanwhile, it is recommended that directors navigate towards a lawful use of AI through delegation, through forecasting, and by limiting the role AI plays in decision-making.


Delegation

Typically, boards may delegate some of their responsibilities to members of management, who are ultimately responsible for the day-to-day affairs of the corporation. Indeed, directors in Australia are authorised to delegate any of their powers to other natural persons pursuant to section 198D of the Corporations Act. And while that provision does not expressly allow for the delegation of powers to AI, it acknowledges that delegation may be effected in a myriad of ways, in the following terms: ‘unless the company’s constitution provides otherwise’. Unlike Australian corporate law, its UK counterpart does not include a legislative provision expressly conferring upon directors the power to delegate. Instead, section 171 of the Companies Act 2006 imposes on directors the duty to act within powers and provides that a director acting in accordance with the company’s constitution is a director who acts within powers. Thus, directors will not be in breach of their duty to act within powers if, for instance, the company’s constitution provides for the directors’ ability to use AI, by way of delegation or otherwise.


Forecasting

In ASIC v Rich (2009) 75 ACSR 1, 623, the Court reasoned that, in relation to decisions made in anticipation of events that might occur in the future, there is room for error of judgment, given that forecasting is a complex and uncertain process. Thus, it can safely be said that if a director relied on AI when forecasting for new products, he or she is likely to be exonerated from liability, as errors of judgment do not of themselves constitute a breach of directors’ duties under Australian corporate law.

Excluding AI from the realm of decision-making

Before directors consider putting AI to uses other than decision-making, it is worthwhile taking a balanced approach to AI. Indeed, the extent to which benefits can be derived from AI is predicated upon the extent of the board’s reliance on such technology. If the board simply disregards the analysis generated by AI whenever it does not conform to the board’s view, then the very recourse to AI might be said to be futile. On the other hand, overreliance on AI recommendations will deprive the board of the advantages associated with its collegial setting.

If the use of AI is confined to administrative aspects of corporate governance, such as meeting preparation, as opposed to work encompassing operational and strategic decision-making, the view can be taken that directors are less likely, if not unlikely, to be held accountable by reason of reliance on AI.

In short, insofar as no guidance on the lawful use of AI by directors can be obtained for want of judicial pronouncement, one cannot predict with sufficient certainty the legal repercussions of using AI within the boardroom. And until the parliaments of the UK and Australia agree to an overhaul of corporate laws in an effort to keep up with the times, directors in both jurisdictions are encouraged to use their companies’ constitutions to accommodate the use of AI.

Samar Ashour is admitted as a solicitor of the Supreme Court of New South Wales.