
Chatting about Chatbots and AI


May 2023


Chatbots powered by language models that generate real-time, human-quality answers and content are the latest breakthrough in artificial intelligence (“AI”). Want to compose an e-mail, a speech or an article? Want it to be in a particular tone, serious or light-hearted, to include jokes or stern warnings? Ask your Chatbot to do it. Using deep learning, the Chatbot will crunch through megabytes of data and text in seconds, training itself along the way, to provide you with what you need on demand. It is as if you are talking to a human being, only one with a superpowered brain.

As with other eye-catching technological leaps (think cryptocurrency, blockchain and video-calls), the launch of this latest generation of Chatbots has come with hyperbolic claims of how they are going to disrupt our lives forever. With tasks that previously took human intelligence minutes, hours or even days to complete now capable of being done faster and more accurately by AI, no industry, it is said, is safe from their disruptive force. We all need to prepare ourselves for (yet another) new normal.

In this edition of Conduct in Focus, we have given much (human) thought to the handling of insurance claims. So we asked a Chatbot a question: "Can you – a Chatbot – improve the insurance claims handling process?"

“Yes,” it said, with the following examples:

  1. Faster Claims Processing: Chatbots can quickly process a large volume of claims, in a fraction of the time it takes human employees. With its natural language processing (NLP) capabilities, a chatbot can quickly capture the necessary information from the customer, validate it, and initiate the claims processing procedure.
  2. 24/7 Availability: Unlike human employees who have work hours, chatbots can be available 24/7 to provide customer support. This means that customers can file their claims at any time of the day or night, and receive a quick response from the chatbot.
  3. Reduced Errors and Frauds: Chatbots can be trained to identify fraudulent claims and flag them for further review. Additionally, chatbots can help reduce errors from human agents by ensuring that all information is collected accurately and consistently.
  4. Improved Customer Experience: By offering quick and efficient claims processing, chatbots can improve the overall experience for the customers. With a chatbot, customers can easily file their claims online, without having to wait on hold or navigate complicated phone menus.

Overall, chatbots can streamline the insurance claims processing procedure, making it more efficient and faster for both insurers and customers.

So there you are!
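(As an aside, for readers curious about the mechanics: the sketch below, written in Python purely for illustration, shows the kind of “capture, validate and flag” flow the Chatbot’s answer describes in points 1 and 3. Every name, field and threshold in it is hypothetical; it is a toy illustration of the concept, not any insurer’s actual system.)

```python
# Illustrative sketch of the "capture, validate, flag" flow described above.
# All names, fields and thresholds are hypothetical, invented for this example.
from dataclasses import dataclass
from datetime import date

@dataclass
class ClaimIntake:
    policy_number: str
    incident_date: date
    description: str
    claimed_amount: float

def validate(claim: ClaimIntake) -> list[str]:
    """Return a list of problems with the captured claim (empty = looks complete)."""
    problems = []
    if not claim.policy_number:
        problems.append("missing policy number")
    if claim.incident_date > date.today():
        problems.append("incident date is in the future")
    if claim.claimed_amount <= 0:
        problems.append("claimed amount must be positive")
    return problems

def needs_human_review(claim: ClaimIntake, fraud_score: float) -> bool:
    """Route high-risk claims to a human handler; both cut-offs here are invented."""
    return fraud_score > 0.7 or claim.claimed_amount > 100_000

# A chatbot session would populate ClaimIntake from the conversation, then
# validate it before initiating processing or escalating to a human handler.
claim = ClaimIntake("POL-12345", date(2023, 4, 2), "Water damage to kitchen", 8500.0)
if not validate(claim):
    print("Escalate to human handler:", needs_human_review(claim, fraud_score=0.2))
```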

When you see that answer appearing within seconds of having typed the question, you get a sense of how impressive these Chatbots are. But the commentary around them is not universally positive. Tech billionaire Elon Musk considers AI to be potentially one of the biggest risks to civilization. Speculation also abounds about the number of jobs that could be eliminated.

The legal profession (as lawyers often do) has already pointed out the many legal conundrums these Chatbots pose. If copyright in literary work belongs to the author, who does copyright in content authored by a Chatbot belong to? A Chatbot, after all, is not a legal person. And who is responsible, accountable and liable for any loss or damage caused by a Chatbot’s output? What about the confidentiality and personal data implications? If a user inputs personal data as part of a question to a Chatbot, how will that data be used and stored (for example, will it be used to answer the question then re-used as part of the Chatbot’s deep learning to improve future content)? Then there’s the issue of cyber security as these Chatbots can be used for both positive and nefarious purposes. The same generative powers which can produce a speech can also create perfectly worded e-mails or believable website content as part of a scam.

Regulators like the Insurance Authority (“IA”) would also need to consider how to address the risks these Chatbots pose if they are ever to be used in the industries they supervise. It is certainly something worth thinking about, given the speed at which innovation is moving. 

One can see how these Chatbots may be used to improve financial literacy and education, with multi-layered answers potentially available at every smartphone user’s fingertips. The veracity of the answers (and the extent of their reliability) would, however, be an issue of contention. (There have been anecdotal cases of these Chatbots “hallucinating” facts which were not real.)

Our approach as a regulator would likely be to take as a starting point the existing standards and principles in the current regulatory framework, and to consider how they might apply.

If an insurer were to license the use of a Chatbot in the insurance process, the IA’s Guideline on Enterprise Risk Management (GL21) would require it to evaluate all risks associated with its use in the specific context. Certainly, comprehensive testing (under tight governance controls) would have to be done before its deployment. Clear disclosure would need to be made as to the Chatbot’s limitations, how it should be used, the data set it is trained on, and how that data is stored, used and retained. Adequate risk mitigation, ongoing monitoring, reporting controls and contingency plans would also need to be in place throughout its deployment. Careful consideration would need to be given to the application of the standards and principles in the IA’s Guideline on Cybersecurity (GL20) and how these are satisfied. Further, if the Chatbot being utilized is provided or hosted by an external service provider, consideration would need to be given as to how the Guideline on Outsourcing (GL14) would apply.

Applying the “regulated activities” regime under the Insurance Ordinance (Cap. 41) (“IO”) to these new forms of Chatbot raises more challenging issues. If, for example, an insurer or insurance intermediary deployed a Chatbot in the provision of its “regulated activities” and something went wrong, would they try to assert “it wasn’t my advice, it was the Chatbot’s”? The resounding regulatory response to this would be “don’t even think about it!” The obligations and duties in relation to “regulated activities” apply to the legal persons deploying the Chatbot (i.e. the individuals and companies licensed and authorized under the IO) rather than to the Chatbot itself (which is just a piece of software, the rights to which the insurer or insurance intermediary in question would be utilizing). Responses from the Chatbot, therefore, would be the responsibility of the insurer or insurance intermediary deploying it. This is precisely why an insurer must pay heed, among other matters, to the Guidelines on outsourcing, risk management and cybersecurity, and why broker companies and agencies must adhere to the corporate governance requirements under Section IX of the respective Codes of Conduct for Licensed Insurance Agents and Licensed Insurance Brokers.

The potential application of the “regulated activities” regime in the context of these Chatbots, however, highlights a deeper philosophical issue with AI. As every individual licensed insurance intermediary will know, when carrying on a “regulated activity”, he or she must abide by the conduct requirements in section 90 of the IO. These conduct requirements are ethical principles of behaviour, such as honesty and integrity, treating customers fairly and acting in their best interests. Applying these ethical principles requires an insurance intermediary to exercise judgement to discern the “right” way to proceed, or the advice to provide, given the particular circumstances and interests of the particular policyholder being served.

Is AI (at least in its development to date) capable of exercising this kind of ethical judgement? This is certainly questionable. A Chatbot (and AI more generally) is not a self-aware, sentient being capable of emotion, empathy or human intuition. It does not have an inherent sense of right and wrong. Presented with an ethical problem by way of input, a Chatbot would solve it like a mathematical equation: crunching through its data-set to work out what societal norms apply to the situation, discerning from this where the hard dividing line between right and wrong lies, and then applying this to the question. It would then emerge with an answer which (like the solution to a maths problem) it would pronounce with perfect confidence to be correct.

The real world, however – where human beings need to manage the everyday risks they, their families and businesses face – does not always work in such “black and white” terms. There may be different insurance options for the customer to choose from depending on the customer’s future intentions. None of these options may be perfect; each would have pros and cons that need to be weighed in the balance. Each option should, therefore, be presented with its upsides and downsides so that it can be discussed and considered, and the customer can make a fully informed decision (this being the aim of fair customer treatment).

Further, an AI Chatbot relies on the input – the question – it is asked. But can the full context of a customer’s circumstances be captured by such input alone? There may be things that such input misses, but which the observation and intuition of a human broker or agent – sitting in front of a customer and talking to him or her about life in general – would pick up.

To put it another way, one sees in the calculative, data-driven approach of AI a methodology which seeks to view human beings not as human, but as abstractions – collections of datapoints from which hard and fast conclusions can be drawn. History tells us that when human beings are treated as abstractions, as merely numbers on a spreadsheet, the outcome for humanity is never a positive one.

Ultimately, ethical judgement is intertwined (in part) with the human capacity to empathize, and with the sense of attachment and trust that both drives and arises from that capacity. To establish trust with a prospective customer means acting right and doing good (and generally being a good and trustworthy person) in the customer’s eyes, which involves understanding the challenges – the very human challenges – the customer faces every day in looking after the health, education and other needs of his or her family or business. Is AI capable of this yet? Or is only a human capable of real human empathy – putting oneself in the shoes of the human customer and thereby truly understanding and acting in his or her best interests?

These are questions that need to be asked, particularly in the context of a regulatory regime based on ethical principles of behaviour.

Perhaps it is because of these as yet unanswered questions that a strong school of thought has emerged asserting that AI should serve to augment rather than replace human capacity. For example, AI could be used to produce output that is treated as a suggestion, on which the human user can then overlay his or her own observations before making a final judgement. Further, even though AI may have its shortcomings, the fact is that human judgement is far from perfect. When applying ethical judgement, it is not unknown for a human to do something that is clearly wrong by any standard of morality, but to convince himself that it is being done for the right reasons (thereby justifying such action). This twisted mindset lies at the heart of many of the disciplinary cases that come before the IA. Could the calculative approach of AI serve to correct a human being from such “ethical fading” before it happens, by prompting him in no uncertain terms that what he is proposing to do (and seeking to self-justify) is clearly wrong?

In a world where AI is used to augment rather than replace human judgement, the human users deploying Chatbots in the provision of services remain responsible, accountable and ultimately liable for those services. It would be up to the insurer or insurance intermediary, then, when deciding how to deploy a Chatbot, not to consider it solely in terms of how quickly a customer can purchase an insurance policy. Rather, the insurer or intermediary must ensure that the principles of fair customer treatment, honesty and integrity, acting in the customer’s best interests and enabling the customer to make a fully informed decision are upheld. That’s as it always should be!

The views expressed in this article are those of a single author (and not necessarily of the IA more generally), but it is certainly the case that technological development in the insurance sector is something the IA is keeping a close collective eye on. We certainly welcome input and views from readers on this subject.