InsurTech Magazine May 2023 | Page 87

“We’re likely to see more Chief AI Officers or similar roles that bridge the understanding between tech, ethics, and legalities”

KATIE SIMMONDS, MANAGING ASSOCIATE, WOMBLE BOND DICKINSON
for example. Predictive AI will be the next step once AI becomes more mainstream, but this area is still in its infancy. Once AI becomes more widely adopted and models have captured sufficient levels of data, we will start to see real-world applications of predictive AI.”
When we think about AI, one of the main insurance-led applications that comes to mind is customer service. The technology can be used to automate the first point of contact for customer enquiries, freeing up human customer service agents to handle more complex queries or to work on other tasks within the business requiring judgement or discretion. According to Lombard, this represents a shift away from customer service interfaces that work for the insurer and towards customer service that works for the customer, improving overall satisfaction.
Fujitsu’s Meghana Nile elaborates: “Customers want an omnichannel experience, which is much more achievable with the help of AI. It makes self-service claims processing much easier, dramatically improving customer experience. But insurance can feel like quite a personal experience to many, and there are times when there will be more complex claims and customers expect the ‘human touch’.
“According to HubSpot, 40% of customers who couldn’t find someone to help them with their problem are still having issues with the product or service. So, it’s clear that when implementing AI, insurers must strike the balance between digital and human interaction; not everything should be done by a machine.
“Most important, however, is that AI in insurance is ethical. To be beneficial to both customers and insurers, AI models have to be fair, transparent, and explainable. As AI evolves, becoming more complex, the companies that develop and provide the technology – and all stakeholders involved in AI – must practise ethics in each process.
“If insurers aren’t careful, unconscious bias will creep into AI when algorithms are set up by a narrow group of people. If there’s a lack of diversity among data scientists – the experts who develop and test these AI models – then they’ll only further reinforce unconscious bias. And that is why we must consciously build solutions that constantly look out for these biases, preventing them from manifesting and causing harm.”
How important is data input for predictive modelling?

Those well-versed in AI will be familiar with the acronym ‘GIGO’, which stands for ‘garbage in, garbage out’. This refers to the principle that, if your AI algorithm is using poor data, it will return poor results. For example, if an insurer is using AI to identify