This article first appeared in Risk Management Magazine.
Artificial intelligence (AI) officially became mainstream when, in 2011, IBM’s Watson defeated two all-time human champions on the game show “Jeopardy!” This was significant, as Watson was not connected to the internet and had to parse complex and subtle nuances of natural language to arrive at the correct response. Overnight, the public grasped the potential impact of AI and how quickly the technology was maturing.
Since then, businesses all over the world have invested in and experimented with AI, and for good reason. Reduced labor and operational costs, lower turnover rates, fewer employee injuries, and increased speed and quality of production all point to the disruptive impact AI will have on business. But any new technology brings new and unforeseen risks, and with those emerging risks come new challenges for the technology’s stakeholders and its end users.
Today, IBM Watson is much more than an impressive linguist and game show champion. IBM’s AI is being used by a number of the world’s biggest law firms to do legal research, combing through millions of pages of case law in a matter of minutes. In October 2016, IBM introduced new AI services for marketing, order fulfillment, supply chain management, workplace collaboration, and human resources (including recruiting and hiring). One of the most promising use cases for Watson is in radiology: identifying cancerous and pre-cancerous tumors in imaging data. Watson has already reviewed some 30 billion medical images, putting its machine learning power to work.
Human radiologists spend years in formal education and training and must be certified before interpreting a chest x-ray. This is to improve the odds that the advice they give is accurate. And as a backstop, radiologists carry professional liability insurance (also known as medical malpractice insurance) in case an error is made. But what happens when IBM Watson gives bad advice? Do AI tools need to be certified to perform the same medical tasks as a human? And what insurance is available in case the advice given by IBM Watson is wrong and causes physical or financial harm?
Let’s say a large radiology practice enlists the help of IBM Watson to flag suspect x-rays for further study by a human radiologist. Unfortunately, Watson misses a cancerous tumor in the review, causing a misdiagnosis and, ultimately, a painful death that could have been prevented if corrective action had been taken soon enough. As a result, the estate of the deceased patient sues the radiologist for medical malpractice. But given Watson’s direct involvement in the misdiagnosis and ultimate fate of the patient, who (or what) is really at fault?
Much of the ultimate liability will depend on the contract between the radiology practice and the technology service provider. Technology service contracts often limit the liability of the service provider to some multiple of the annual fee; if the provider charges $200,000 a year and the contract caps liability at two times annual fees, for example, the practice could recover no more than $400,000 from the provider, no matter the size of the malpractice judgment. Hold harmless and indemnity language can further curb legal liability. The language of this agreement will be critical in determining who pays, and how much.
The second issue is insurance. Once the allocation of liability is determined, insurers will be called upon to make the injured party whole again. And this is where things get interesting.
Radiology practices carry professional liability insurance for this very situation. Claims of malpractice are common, so underwriting data is vast and pricing models are well-tested. Insurers are able to get comfortable with the risk, since they have decades of data upon which to base their premiums. In effect, they are confident in how much they will pay out for every dollar of premium they take in. The same cannot be said for malpractice caused by an AI.
Most technology companies carry a form of professional liability insurance, known as technology errors and omissions insurance, which is designed to cover the financial loss a company’s customer suffers as a result of an error or omission in the service or product supplied to that customer. But these policies are designed to cover financial loss, not bodily injury or property damage. Those damages are typically covered under a general liability policy. Unfortunately, general liability policies typically exclude professional liability, meaning no coverage for property damage or bodily injury that results from a company’s services or technology products like software. And therein lies the issue: the technology errors and omissions policy excludes bodily injury and property damage, and the general liability policy excludes injury caused by technology products and software. This glaring gap in coverage is a critical business risk for companies deploying AI in any medical capacity.
As with other emerging risks, including autonomous vehicles and drones, risk managers and insurers have to get creative with policy wording in order to close the coverage gaps that exist in off-the-shelf insurance products today. For the IBM Watson example above, insurers are beginning to offer coverage for contingent bodily injury under technology errors and omissions policies. But this coverage enhancement does not come free, and only a limited number of carriers are willing to add it. For now, London is the most common insurance market offering this bespoke coverage, though some U.S. insurers offer it as well.
As the technology matures and more loss data becomes available for actuaries to crunch and price, insurance products will adapt to provide affirmative coverage for these emerging risks. But this will take time, and for now AI remains one of the many new frontiers of commercial risk management and insurance. The enormous benefits of technologies such as artificial intelligence bring with them new risks that will take careful thought and creativity to identify, measure and manage. Perhaps we can count on IBM Watson, or some other AI, to guide us to the ideal solution to these issues. But that, like many of the promises of AI, remains to be seen.