Technology
Artificial intelligence: time to worry, or just to think?
Captive International takes a look at how artificial intelligence might impact the world of insurance.
“These things are not live. These are not conscious, but they are very intuitive.” Matthew Queen, The Queen Firm
Artificial intelligence (AI) is constantly in the headlines, with one variant in particular, ChatGPT, taking up a large amount of space in newspapers and magazines.
ChatGPT is a form of AI that can absorb a large amount of information and produce a script on the topic at hand. Used correctly, it can be a useful tool for summarising a complicated subject for someone who needs to explain it, and as such it has caused a lot of uncertainty over how some professions, such as journalism, will cope with it.
There is, of course, a downside. Used incorrectly, it can help someone cheat at homework or, in one memorable case that hit the headlines, draft a formal legal submission while inventing the legal citations to past cases that supposedly supported it, resulting in judicial fury.
How does this new tool affect insurance, and captive insurance in particular? Charlotte Rowlandson and Karishma Brahmbhatt, senior associate and counsel, respectively, at law firm Allen & Overy, told Captive International that at a high level, if you take the insurance industry’s engagement with insurtech more generally as a proxy for adoption of AI technologies, you might expect it to follow a few years behind the banking industry.
Allen & Overy’s sense from discussions with industry and insurtech clients is that, as for many industries, adoption is largely in the proof of concept/testing phase, with teams being tasked with a mandate to explore both internal development projects and external procurement. Notwithstanding this, Rowlandson and Brahmbhatt said that the scope for adoption of AI use cases in insurance is considerable throughout the insurance value chain.
Given the prevalence of information silos that exist across legacy systems (and in some instances, reliance on paper-based processes), the insurance industry offers numerous areas of low-hanging fruit where digitisation, particularly where combined with AI tools, has the potential to deliver cost efficiencies.
Marcus Schmalbach, chief executive officer of Ryskex, told Captive International that one central aspect of AI is machine learning (ML), which can help its user identify patterns in data and draw insights from them. According to Schmalbach, ML can be of great benefit to the general insurance operations of a captive.
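The kind of pattern-finding Schmalbach describes can be illustrated with a minimal sketch: grouping claims by cause of loss to surface frequency and average severity. The data and categories below are hypothetical, and real captive analytics would use far richer models than this.

```python
# Minimal illustration of finding patterns in claims data:
# group hypothetical claims by cause of loss, then report
# claim frequency and average severity per cause.
from collections import defaultdict
from statistics import mean

claims = [  # (cause_of_loss, paid_amount) -- illustrative figures only
    ("water damage", 12_000), ("water damage", 8_500),
    ("fire", 150_000), ("theft", 4_200),
    ("water damage", 9_900), ("theft", 3_800),
]

by_cause = defaultdict(list)
for cause, paid in claims:
    by_cause[cause].append(paid)

for cause, amounts in sorted(by_cause.items()):
    print(f"{cause}: {len(amounts)} claims, avg severity {mean(amounts):,.0f}")
```

Even this toy grouping hints at the insight ML is after: water damage drives frequency while fire drives severity, two very different retention decisions for a captive.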
Schmalbach points to an important aspect of tools such as ChatGPT: “It’s a fantastic tool, but it still makes mistakes. It’s fascinating, but you will never have, as some people expect, the opportunity to say: ‘ChatGPT, what will be my premium for earthquake in Tokyo next week?’, or ‘what are the hurricane underwriting guidelines?’. The answer is always: ‘I can’t do that, I’m not able to make predictions. I’m not able to give you realistic data on that’.
“But then, and this is very interesting, the machine says: ‘If you are interested in how we can underwrite this, this is what you should take into account’ and it gives you a list of very important things you should have in mind. It may cost you thousands of dollars just to get the list.
“So, it has an impact, but the final decisions and the final underwriting will never be done by ChatGPT.”
“Establishing appropriate systems and controls around these use cases will be key.” Charlotte Rowlandson, Allen & Overy
Constant evolution
The human element was also raised with Captive International by Matthew Queen, owner of The Queen Firm and chief executive of Sherbrooke Corporate, who pointed out that while an AI such as ChatGPT can absorb information, it does not calculate. But, he added, it does have its uses.
Underwriting memoranda on individual risks can be produced very quickly in the captive insurance space, Queen said, adding that, conceivably, if someone trained an AI with enough human support on data selection, a good chunk of a feasibility study could be automated.
If you know how to calculate loss runs, have a database in which to store them, and have an objective way of reviewing the data, you can compare the results with known commercial rates and produce a feasibility study in relatively short order.
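The comparison Queen outlines can be sketched in a few lines: derive an indicated captive premium from a historical loss run and set it against a commercial market quote. All figures, the flat expense load, and the simple five-year average loss pick below are hypothetical assumptions; a real feasibility study would involve actuarial development, trending, and much more.

```python
# Hedged sketch of a loss-run vs commercial-rate comparison.
# All numbers are illustrative, not real market data.
from statistics import mean

annual_losses = [410_000, 385_000, 520_000, 295_000, 450_000]  # 5-year loss run

expected_losses = mean(annual_losses)      # naive loss pick: straight average
expense_load = 0.25                        # assumed captive expense ratio
indicated_premium = expected_losses / (1 - expense_load)

commercial_quote = 750_000                 # premium quoted by the market
savings = commercial_quote - indicated_premium

print(f"Indicated captive premium: {indicated_premium:,.0f}")
print(f"Commercial quote:          {commercial_quote:,.0f}")
print(f"Indicated saving:          {savings:,.0f}")
```

The point is the structure, not the numbers: once loss data sits in a database in a consistent form, this comparison step is mechanical, which is why Queen sees it as automatable.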
According to Queen, fears that AI will doom large parts of the jobs market through automation are largely baseless, given present AI's inability to calculate. But there is a major caveat: AI is evolving all the time and occasionally springs surprises, even on its owners.
“These things are not live. These are not conscious, but they are very intuitive, in terms of their ability to work with someone like you and me,” said Queen.
“Earlier this year it was announced that one AI had developed the ability to translate Bengali, based on a relatively small number of words. This baffled its owners, who had not programmed that ability into it.”
“We firmly believe that ubiquity of AI is not an ‘if’ question, but a ‘when’ question.” Ashwin Kashyap, CyberCube
Controls needed
Allen & Overy told Captive International that insurers and insurance intermediaries are increasingly exploring the use of AI tools in underwriting and pricing, portfolio risk management, claims assessment and fraud detection. This is in addition to the more business-neutral use cases being explored by insurance companies, such as use by the legal teams, or for marketing or software development.
Establishing appropriate systems and controls around these use cases will be key, said Rowlandson and Brahmbhatt. This will involve an overarching governance framework with clear designation of accountability within the organisation, mechanisms for assessing each use case (for example, a requirement to prepare AI impact assessments and/or data protection impact assessments [DPIAs] for each proposed use case), monitoring, and record-keeping. Some insurers are adapting their existing DPIAs to introduce AI considerations, such as addressing the ethical aspects of the AI being proposed.
“The pace of development in AI technologies is accelerating,” Rowlandson and Brahmbhatt said. “Given the explosion of media attention in recent months, alongside the EU’s draft AI regulation entering into the final stages of the legislative process, AI adoption is no doubt now a key component of board level discussions around digitisation and technological innovation.”
According to Allen & Overy, firms will need to establish their own internal appetite for adopting these nascent technologies in the context of inevitable increased regulatory scrutiny and potential uncertainty as to how existing regulatory frameworks should be interpreted in the context of AI use cases—building up expertise in legal and compliance departments will be important in facilitating institutional confidence in these technologies going forward.
The firm understands from discussions it is having that companies are increasingly turning their minds to establishing more formal governance processes in relation to the use of AI technologies, including in relation to procurement processes where information security is a key area of concern.
Relatedly, given how new and unpredictable the technology is, cyber risk analytics firm CyberCube is urging the insurance industry to pay particularly close attention to AI. Ashwin Kashyap, co-founder and chief product officer of CyberCube, said that AI is truly transformational to the world at large and affects every industry vertical, including insurance.
“In our opinion, it is as big as the cloud, the mobile phone, and other transformational technologies that we’ve seen over the past several decades,” Kashyap said. “As a result, we need to pay close attention to what it means to the cyber insurance market. From CyberCube’s perspective, we firmly believe that ubiquity of AI is not an ‘if’ question, but a ‘when’ question.
“And when that becomes reality, you should expect a regime change in terms of what the cyber threat landscape would look like.”
It is worth pausing to consider one particular point about humanity's quest to develop technology in new and different ways. Just because we can do something, should we? Or should we stop and think about the implications?
Image: Shutterstock.com / PopTika