It is becoming evident that external risk factors capable of triggering financial distress are now as pertinent to a company as traditional risks such as financial mismanagement and poor decision-making. Is AI one of them?
Speaking of poor decision-making, many businesses around the world face a double bind: there is a pressing need for digitalisation, yet businesses must be fully aware of the risks involved in this undertaking, or the ship will sink rapidly.
Risk of Disruption
The article points out that AI has the potential to disrupt industries on an unprecedented scale, making it crucial for board directors to monitor the competitive landscape and adapt accordingly. To manage the risk of disruption:
- Stay informed, and encourage management (especially the CEO) to stay informed, about emerging AI capabilities and their potential impact on your company and industry. Ask for regular briefings and updates on the most relevant AI developments, and keep potential risks and opportunities on the agenda. Subscribe to industry newsletters, attend conferences, and bring in AI experts to keep abreast of the latest developments.
- Promote a culture of innovation and agility that starts with the CEO. Ensure that management is fostering experimentation, learning, and adaptation to help the company respond quickly to the rapid changes in the AI landscape. AI initiatives will only succeed with direct support from the top. (See How To Implement And Scale AI In Your Organization)
- Oversee strategic planning and investments in AI-driven projects. Monitor and evaluate the progress of these AI initiatives and ensure that they align with the organization’s long-term goals and vision. AI shouldn’t be done for its own sake; it should solve critical business problems or create new opportunities for growth.
Cybersecurity Risk
The article adds that AI exacerbates the challenges of cybersecurity by increasing the sophistication and volume of cyber-attacks. To address this risk:
- Understand the AI threat landscape. Stay informed about the latest AI-driven cyber threats, such as AI-generated phishing emails, deepfakes, and automated vulnerability exploitation. We are now living in an age where we can’t trust what we read, see or hear because it may have been created by a bad actor using AI. (See The Scary Truth Behind The FBI Warning: Deepfake Fraud Is Here And It’s Serious—We Are Not Prepared For An Attack)
- Ensure investment in advanced cybersecurity, including continual updates, to stay ahead of evolving threats. Request regular reports on the organization’s cybersecurity posture and AI-driven defence strategies. (See If Microsoft Can Be Hacked, What About Your Company? How AI is Transforming Cybersecurity.) Encourage management to explore and invest in AI-driven security solutions, such as machine learning-based intrusion detection systems, threat intelligence platforms, and automated incident response tools.
- Request evaluation of vendor relationships with respect to AI security. Organizations often rely on third-party AI systems or services, making it crucial to assess the security of these external solutions. Ensure that management conducts thorough security assessments of AI vendors and integrates security requirements into vendor contracts.
Reputational Risk
The article points out that the behaviour of AI systems can lead to PR disasters if they fail to align with your organization’s values. To mitigate reputational risk:
- Establish clear guidelines and ethical standards for AI development and deployment. Work with management to ensure that they have a code of ethics for AI that reflects your organization’s values and expectations, and that all AI projects adhere to these standards. (See Google, Facebook And Microsoft Are Working On AI Ethics—Here’s What Your Company Should Be Doing)
- Request regular audits and assessments of AI systems to ensure the results align with the company’s values. Ask management to conduct third-party audits or internal evaluations to assess the performance, fairness, and transparency of AI systems (although transparency may be a significant challenge).
- Develop a crisis management plan for the board to address any AI-related incidents that may harm your organization’s reputation. Consider that each evolution of AI within the company is likely to increase the risk of an unforeseen incident. Ensure that there is a clear communication strategy to respond quickly and transparently to any issues that arise, and that the board is kept informed of any incidents.
Legal Risk
The article points out that, with regulations surrounding AI evolving rapidly, board directors must stay informed and ensure compliance. To manage legal risk:
- Track emerging legislation at the federal, state, and international levels by ensuring that corporate counsel or another relevant executive owns this task and reports frequently. Make AI legal risk an agenda item at board meetings to ensure that this is kept up-to-date. (See The EU Is Regulating Your AI. Five Ways To Prepare Now)
- Have management collaborate with legal experts to ensure that your organization is prepared for upcoming regulatory changes. Encourage management to conduct regular legal assessments to identify potential compliance gaps and develop strategies to address them. Highly regulated industries are already taking action against the use of AI.
- Foster transparency in AI decision-making processes, whenever possible, to comply with existing and future regulations. “Black box” AI—where no one can tell you how AI arrived at a particular result—will (in many cases) be replaced with interpretable AI by law sooner or later. By ensuring that management develops documentation and reporting processes now, you will avoid complicated entanglements later.
Operational Risk
The article adds that AI systems can lead to operational failures if not properly managed or understood. To address operational risk:
- Request employee training and education on AI systems. Ensure that management provides appropriate training and resources for employees to understand and use AI technologies correctly, and to know when not to use them.
- Ensure robust monitoring and evaluation processes for AI systems. Encourage management to establish clear performance benchmarks and track the performance of AI systems against these goals. Have management present a framework setting out what level of human involvement each type of AI system requires. (See AI Can Be Dangerous—How To Reduce Risk When Using AI)
- Support AI system resilience and redundancy. Advocate for the development and deployment of AI systems that are resilient and have built-in redundancies to minimize the impact of operational failures. Encourage management to consider investing in AI system backups, alternative decision-making processes, or fail-safe mechanisms.
Changing Tides
The article points out that in the face of growing AI adoption, board directors should prioritize AI risk management as a core component of their strategic planning. By taking a proactive approach to addressing the risks of AI, boards can not only safeguard the interests of the company but also seize the opportunities that AI presents to drive innovation, enhance customer experiences, and create a competitive edge.
However, caution must not be thrown to the wind. The day is coming when AI will play a role in corporate decision-making at the highest levels within companies, and that will carry significant liabilities.
There were significant concerns about the impact of robo-advice in the insurance and financial advice field. However, the Financial Sector Conduct Authority put controls in place that both allowed this practice and limited the impact of such advice.
This is another example of why regulatory reform is needed in the business rescue/turnaround space. We must continue with digitalisation, and it won’t be long before we see AI having a greater impact on corporate decision-making at board level.
We need to put systems and processes in place now to address these risks, before they start pushing businesses into financial distress at an alarming rate.
Moses Singo is a Partner at Genesis Corporate Solutions and is a Junior Business Rescue Practitioner.