
# How to ensure due diligence when AI is making the decisions

In a recent article, we discussed the feasibility of companies in South Africa amending their constitutional documents to provide for compulsory reliance on artificial intelligence (AI) in corporate decision-making.

Image source: Olena Yakobchuk – 123RF.com

AI has several notable advantages over the human mind, especially the ability to retain and recall vast quantities of information and to make decisions and give advice unaffected by bias and emotion.

An analysis of the relevant sections of the Companies Act led us to conclude in the previous article that, as the law stands, there is nothing to prevent a company from providing, in its memorandum of incorporation or a shareholders’ agreement, that an AI system must be consulted before the board makes a final decision. However, the directors cannot abrogate their duty to become fully informed about corporate affairs, apply their own minds and exercise their discretion in the best interests of the company. Given the nature of AI, this issue must also be addressed if a company requires its directors to rely on AI in making decisions.

## Due diligence and liability

In terms of section 77(2) of the Companies Act, 71 of 2008, a director of a company may be held liable, in accordance with the common-law principles relating to delict, for any loss, damages or costs sustained by the company as a consequence of the director’s breach of the duty imposed by section 76(3)(c), which requires a director to carry out his or her duties with the degree of care, skill and diligence that may reasonably be expected of a person —
Since well before the advent of the latest generation of AI, company boards have relied on information collected, stored and processed electronically when making decisions. With advances in technology of all kinds, it might well be said that directors who did not take advantage of the latest tools to help them reach accurate decisions quickly were not in fact carrying out their duties with the required degree of care, skill and diligence.

## Legal implications

But as AI has been developed to replicate the way in which human minds think and learn, a new issue has arisen that affects the legal implications of using AI in corporate decision-making: just as it is not possible to “read” the thought processes by which a human being reaches a decision unless they explain it, we do not always understand the process by which AI reaches its decisions. This may make it difficult for a company’s board to show that it has exercised the degree of care, skill and diligence required by section 76(3)(c) of the Companies Act.

The answer to this problem, it is suggested, is for a company to ensure that proper governance rules are in place, regulating the company’s use of AI and ensuring that there is a chain of record-keeping and accountability by which processes are documented and explained.

The EU’s Ethics Guidelines for Trustworthy Artificial Intelligence, published in April 2019, provide useful guidance on the considerations that should be catered for. The Guidelines propose that, in order for AI used in companies to be trustworthy, it must be “lawful, ethical and robust”. This in turn requires that the following considerations be taken into account:

- human agency and oversight;
- technical robustness and safety;
- privacy and data governance;
- transparency;
- diversity, non-discrimination and fairness;
- societal and environmental well-being; and
- accountability.
In terms of the Companies Act, the directors can never abrogate their responsibility and must be able to show that they have exercised the requisite degree of skill and care in carrying out their duties. As we are navigating as yet uncharted waters, the courts have not yet had occasion to pronounce on what would constitute sufficient compliance by directors who use AI to aid corporate decision-making. We suggest, however, that if directors can show that they implemented these guidelines when introducing and using AI, it ought to go a long way towards establishing that they have complied with their obligations.

## About the author

Ian Jacobsberg, Director, Fluxmans Attorneys