The pace of change that technology is driving across every aspect of life is immense. By many accounts, society has experienced more disruption in the past ten years than in the previous 50 combined.
The scary part about this is that the tech juggernaut will keep rolling forward, gaining momentum.
I have pointed out before that, when I was a journalist in the financial services sector, there was a lot of concern about technology and the rise of robo-advice in the insurance industry. Now, almost every aspect of a policy is governed by tech. But will this revolution stop here? Turnaround Talk published an article on 11 April pointing out that, in the future, we may see robo-directors, with important board decisions being made by artificial intelligence.
The liabilities associated with this are immense. I recently spoke to Michael-John Damant, a Director at Genoa Underwriting Managers, to learn more about these risks.
As a Director yourself, do you feel that AI directors will become commonplace?
The concept of AI directors or AI-driven decision-making in the corporate world is an interesting one. AI technology has the potential to automate certain aspects of decision-making, analyse data at scale, and provide insights to support strategic planning and operational efficiency.
In some industries, we have already seen the use of AI systems to optimise processes, analyse market trends, or make data-driven recommendations. For example, in finance, AI algorithms can assist in investment decisions, and in customer service, AI-powered chatbots can handle basic inquiries.
When it comes to the role of a director in a company, however, there are several factors to consider. Directors are responsible for setting the strategic direction of the organization, overseeing its operations, managing risks, and making critical decisions that impact the company’s success. These responsibilities often require a deep understanding of the business and industry dynamics and the ability to navigate complex situations.
While AI can provide valuable insights and support in decision-making, the final responsibility and accountability still lie with human directors. Judgment, experience, relationships, and the ability to weigh a wide range of factors, including ethical considerations, stakeholder interests, and long-term vision, are crucial in a directorial role.
In summary, while AI technology can be a powerful tool to support decision-making and enhance operational efficiency, it is unlikely to completely replace human directors in the foreseeable future.
From a liability point of view, what are some of the red flags when it comes to AI directors?
There are several red flags that insurers may take into account due to the unique risks associated with artificial intelligence. Listed below are some potential concerns:
- lack of human oversight: If an AI director operates autonomously without human intervention or oversight, it may raise concerns about accountability and decision-making processes. Insurers may be wary of situations where critical decisions are made solely by an AI system without any human involvement or review;
- data quality and biases: AI systems heavily rely on data for training and decision-making. If the data used to train an AI director is of poor quality, incomplete, or biased, it can lead to inaccurate or unfair decision-making. Insurers may scrutinize the data sources, data management practices, and data quality assurance processes to ensure fairness and accuracy (a simple illustration of such a fairness check follows this answer);
- technical malfunctions and errors: Like any technology, AI systems are prone to technical glitches, errors, or malfunctions. If an AI director’s malfunction leads to incorrect decisions, financial losses, or legal issues, insurers may be concerned about the potential liability and the impact on the insured organization; and
- evolving underwriting standards: the adoption of AI directors is a relatively new area, and the insurance industry is still developing its understanding of the associated risks. Insurers may have varying perspectives and requirements based on their underwriting guidelines, industry expertise, and regulatory frameworks.
It would be advisable to consult with insurance professionals who specialize in AI and emerging technologies to get accurate and up-to-date information regarding liability insurance for AI directors.
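To make the data-quality and bias point concrete, here is a minimal sketch in Python, with invented figures, of the kind of fairness check an insurer might ask to see. It computes the approval-rate gap between two hypothetical applicant groups in a model’s decisions (a simple demographic-parity test); the 10-point review threshold is also an assumption for illustration.

```python
# Hypothetical illustration of a simple fairness check an insurer might
# ask for: the approval-rate gap between two groups (demographic parity).
# All figures here are invented for the example.
import numpy as np

# decisions: 1 = approved, 0 = declined, for applicants in groups A and B
group_a_decisions = np.array([1, 1, 0, 1, 1, 0, 1, 1])  # 75% approved
group_b_decisions = np.array([1, 0, 0, 1, 0, 0, 1, 0])  # 37.5% approved

gap = abs(group_a_decisions.mean() - group_b_decisions.mean())
print(f"Approval-rate gap: {gap:.1%}")

# A hypothetical review threshold; real thresholds are a policy decision
if gap > 0.10:
    print("Gap exceeds 10 percentage points: flag the model for bias review")
```

Real fairness reviews use richer metrics, but even a simple gap like this makes bias measurable rather than anecdotal.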
How would Genoa approach underwriting this type of risk?
Underwriting the risks of an AI director would involve a careful assessment of various factors. Here are some key considerations that we may take into account during the underwriting process:
- risk assessment: we would evaluate the risks associated with the AI director. This may include analysing the specific industry in which the AI director operates, the potential impact of its decisions, and the likelihood of errors or malfunctions;
- development and testing: we would assess the development and testing protocols followed during the creation of the AI director. This may include evaluating the expertise and experience of the development team, the methodologies used, and the testing procedures to ensure the system’s reliability, robustness, and adherence to industry standards; and
- legal and regulatory compliance: we would assess whether the AI director complies with relevant legal and regulatory requirements. This may involve evaluating the system’s ability to adhere to industry-specific regulations, ethical guidelines, privacy laws, and any other applicable legal frameworks.
The specific underwriting criteria for AI directors are still evolving, and insurers with expertise in AI and emerging technologies are likely to have a more tailored approach to underwriting these risks. Organizations seeking insurance coverage for their AI directors should work closely with insurance professionals who have a deep understanding of the associated risks and can provide guidance on appropriate coverage options.
Looking at AI directors from a risk management perspective, is the sole risk the fallout from poor decision-making, or do the traditional technology risks of cyber threats and hacking also need to be covered from a liability perspective?
From a risk management perspective, the risks associated with AI directors go beyond just poor decision-making. While poor decision-making can certainly have significant consequences, there are other risks that need to be considered, including traditional technology risks such as cyber threats and hacking.
Here are some key risks that may need to be covered from a liability perspective:
- poor decision-making: one of the primary risks associated with AI directors is the potential for poor decision-making. If an AI director makes incorrect or biased decisions that result in financial losses, legal liabilities, or reputational damage, it can expose the organization to liability claims. Insurers may assess this risk by evaluating the accuracy and reliability of the AI director’s decision-making processes;
- cybersecurity and data breaches: AI directors typically rely on large amounts of data, and this data may be vulnerable to cybersecurity threats and data breaches. Hackers or malicious actors may attempt to gain unauthorized access to the AI director, compromise its functionality, or steal sensitive information. Insurers may consider cybersecurity measures, encryption protocols, access controls, and incident response plans to mitigate this risk; and
- technical malfunctions and errors: AI systems are not immune to technical malfunctions, errors, or bugs. If an AI director experiences a malfunction that leads to incorrect decisions, operational disruptions, or financial losses, it can expose the organization to liability claims. Insurers may assess the development, testing, and maintenance practices of the AI director to mitigate the risk of technical failures.
AI directors would be a major development in liability insurance. As a major liability insurer, was this risk on your radar five or even two years ago?
The adoption of AI directors is a relatively recent phenomenon, but the insurance industry has been actively monitoring and adapting to emerging risks associated with artificial intelligence. While it’s difficult to provide a comprehensive overview of all insurance companies’ activities, it can be said that the awareness and consideration of AI directors by insurance companies have been increasing over the past five years.
Insurance companies have recognized the unique risks and challenges posed by AI directors and have been working to develop appropriate coverage options and underwriting practices. Some insurance companies have introduced specialized products or endorsements to address the liability risks associated with AI technology, including AI directors.
Additionally, insurance industry associations and regulatory bodies have been discussing and researching the implications of AI and machine learning in various fields, including directors and officers (D&O) liability insurance. These discussions aim to enhance the understanding of AI-related risks, promote best practices, and develop relevant frameworks for underwriting and risk management.
Circling back to the first question: if you don’t feel that AI directors will become commonplace, to what extent will AI influence board decision-making? As with all things in technology, AI’s influence will never completely disappear.
You are correct that AI’s influence on board decision-making is likely to continue, even if AI directors themselves do not become commonplace. AI technology has the potential to greatly impact decision-making processes in the boardroom by providing valuable insights, data analysis, and decision support.
AI can provide this support by offering:
- data-driven insights: AI can analyse large volumes of data quickly and extract meaningful patterns and insights. Boards can leverage AI to obtain data-driven insights into market trends, customer behaviour, operational performance, and risk factors. These insights can inform strategic decision-making and help boards make more informed choices; and
- risk management and predictive analytics: AI can help boards assess and manage risks more effectively. AI algorithms can analyse historical data, detect patterns, and predict potential risks or opportunities. By leveraging AI’s predictive capabilities, boards can enhance their risk management strategies and make proactive decisions to mitigate potential risks (a brief illustrative sketch of this kind of predictive scoring follows below).
While the final decision-making authority typically rests with human directors, AI can act as a valuable tool to enhance the quality and efficiency of decision-making.
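As a purely illustrative sketch of the predictive-analytics point above, the snippet below (Python, using scikit-learn) fits a simple classifier on invented historical indicators and scores a hypothetical decision scenario. The features, data, and model choice are all assumptions made for the example, not a description of any real board tool.

```python
# Minimal sketch of the kind of predictive risk model described above.
# The feature names and all data are hypothetical; a real board-level
# tool would draw on far richer, validated data sets.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical historical records: [debt_ratio, revenue_growth, churn_rate]
X = rng.random((200, 3))
# Hypothetical outcomes: 1 = the period ended in a material loss event
y = (X[:, 0] - X[:, 1] + X[:, 2] + rng.normal(0, 0.2, 200) > 0.6).astype(int)

model = LogisticRegression().fit(X, y)

# Score a prospective decision scenario and surface the risk probability
scenario = np.array([[0.7, 0.1, 0.4]])  # high debt, low growth, elevated churn
print(f"Estimated loss probability: {model.predict_proba(scenario)[0, 1]:.0%}")
```

In practice a board-support tool would also need explainability and human review; the point of the sketch is only that historical indicators can be turned into a forward-looking risk probability.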
What role has AI played in underwriting liability risks?
Whilst I’m not entirely sure how it is being used around the world, AI could play a significant role in underwriting liability risks by enhancing efficiency, accuracy, and insights in the underwriting process.
It could do this through:
- data analysis: AI algorithms can analyse vast amounts of data from various sources, including historical claims data, financial records, industry trends, and external data sets. This analysis helps underwriters identify patterns, correlations, and risk factors more effectively. AI can assist in automating data extraction, data cleansing, and risk scoring, allowing underwriters to make informed decisions based on robust data analysis (a toy illustration of automated risk scoring appears after this answer);
- automation and efficiency: AI technology enables the automation of repetitive underwriting tasks, such as data entry, document processing, and routine risk assessment. This automation frees up underwriters’ time, allowing them to focus on more complex risk evaluations and customer interactions. AI-powered systems can also streamline workflows, reduce manual errors, and enhance overall efficiency in the underwriting process; and
- fraud detection: AI can contribute to fraud detection and prevention in underwriting. By analysing patterns, anomalies, and historical data, AI algorithms can identify potential fraud indicators and alert underwriters to suspicious activities. This can help insurers mitigate fraud risks and make more accurate underwriting decisions.
It’s important to note that while AI technology has the potential to enhance underwriting processes, the expertise and judgment of human underwriters remain crucial. AI’s place is therefore the automation of routine tasks, improved risk assessment, and enhanced efficiency.
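To illustrate the automated risk-scoring idea mentioned above, here is a toy sketch in Python. The factors, weights, and thresholds are invented for the example and are not Genoa’s actual underwriting criteria.

```python
# Illustrative only: a toy risk-scoring routine of the sort an underwriting
# workflow might automate. Factors and weights are invented for the example.
from dataclasses import dataclass

@dataclass
class AIDirectorProfile:
    human_oversight: bool         # is every decision reviewed by a person?
    audited_training_data: bool   # has the training data been audited for bias?
    incident_count: int           # recorded malfunctions in the past year

def risk_score(profile: AIDirectorProfile) -> float:
    """Return a 0-100 score; higher means higher underwriting risk."""
    score = 0.0
    score += 0 if profile.human_oversight else 40
    score += 0 if profile.audited_training_data else 30
    score += min(profile.incident_count * 10, 30)  # cap incident penalty
    return score

applicant = AIDirectorProfile(human_oversight=True,
                              audited_training_data=False,
                              incident_count=1)
print(f"Indicative risk score: {risk_score(applicant)}/100")
```

The design point is that every factor and weight is explicit and auditable, which is exactly what an underwriter reviewing an automated process would want to see.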
How will this change in the future?
AI is expected to become increasingly relevant in the future, where it should be able to offer:
- enhanced customer experience: AI-powered chatbots and virtual assistants can improve customer interactions and provide faster, more accurate responses to inquiries. AI may also assist in claims processing, automating routine tasks, and providing real-time updates to policyholders;
- advanced risk assessment and underwriting: AI’s data analysis capabilities will continue to evolve, enabling insurers to assess risks more accurately and make more informed underwriting decisions;
- claims management and fraud detection: AI can significantly improve claims management processes by automating claim intake, document processing, and fraud detection. AI algorithms can analyse claim data, identify patterns of fraudulent behaviour, and flag suspicious claims for investigation. This helps insurers streamline claims processing, reduce costs, and improve fraud detection capabilities (a minimal anomaly-screening sketch follows this list); and
- automated underwriting and policy generation: AI can automate underwriting processes and policy generation, reducing manual efforts and speeding up the issuance of policies. By leveraging AI algorithms, insurers can streamline underwriting workflows, improve efficiency, and offer faster policy issuance to customers.
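As a minimal sketch of the anomaly-based fraud screening described above, the snippet below (Python, using scikit-learn’s IsolationForest) flags unusually large, very early claims in an invented claims feed. The fields, figures, and contamination rate are all assumptions for illustration.

```python
# A minimal sketch of anomaly-based claims screening, assuming a tabular
# claims feed; the fields and figures here are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical claims: [claim_amount, days_since_policy_start]
normal = rng.normal([5_000, 400], [1_500, 150], size=(300, 2))
suspect = np.array([[48_000, 12], [52_000, 8]])  # large, very early claims
claims = np.vstack([normal, suspect])

detector = IsolationForest(contamination=0.01, random_state=0).fit(claims)
flags = detector.predict(claims)  # -1 marks an anomaly for investigation

for idx in np.where(flags == -1)[0]:
    print(f"Flag claim {idx} for review: {claims[idx].round(0)}")
```

-1 is scikit-learn’s convention for anomalies; in a real pipeline, flagged claims would be routed to a human investigator rather than being declined automatically.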
Checks and balances are key
For the business rescue and business turnaround profession, the critical issue with robo-directors is the recovery of a company that becomes financially distressed following a poor decision made by AI.
Some companies have recovered from periods of mismanagement and poor decision-making. However, there are levels of mismanagement from which a company can never recover. Will Steinhoff ever recover? And while Comair stated that fuel costs were the reason behind its liquidation, more than one employee protest indicated that there were mismanagement issues at the airline.
Companies may therefore be unable to recover, depending on the level of mismanagement by AI. We must start putting checks and balances in place to govern future AI decision-making at board and executive level.