The growing use of artificial intelligence to price insurance could erode basic legal protections designed to safeguard both individuals and the insurance market. To prevent discrimination and ensure fairness, regulators must begin to develop new capacities and tools.

On September 19, 2018, John Hancock, one of the oldest life insurance companies in the US, made a radical change: it stopped offering life insurance priced solely by traditional underwriting factors like age, gender, health history, and employment history.

Instead, John Hancock began offering coverage priced according to interactions with policyholders through wearable health devices, smartphone apps, and websites. According to the company, these interactive policies track activity levels, diet, and behaviors like smoking and excessive drinking. Policyholders who engage in healthy behaviors like exercise, or who prove that they have purchased healthy food, receive discounts on their life insurance premiums, gift cards to major retailers, and other perks; those who do not presumably get higher rates and no rewards. According to John Hancock, these incentives, along with daily tracking, will encourage policyholders to live longer and incur lower hospitalization-related costs.

John Hancock’s decision is not surprising. Life insurers with healthier policyholders pay fewer claims and collect more premiums. More generally, all insurers have a voracious appetite for data, and these interactive policies allow insurers to gather all sorts of information about their policyholders. This additional information should allow John Hancock to match the price of a policy more precisely to the likelihood of death. That, in turn, should allow the firm to win a larger share of the most desirable risks (those with a better chance of living longer) and mitigate the twin evils of insurance: adverse selection (disproportionately attracting the riskiest customers) and moral hazard (policyholders taking more risks because they are insured).

Some policyholders may also see real benefits from this change—those who engage in healthy activities will reap the intrinsic reward of health, pay less for life insurance, and receive cash and gifts from their insurer. But these new data-oriented policies come with significant costs for policyholders, in large part because of the technology enabling John Hancock’s new life insurance product.

Like many insurers, John Hancock is turning to artificial intelligence to price policies. Insurance companies have long priced their policies using traditional statistical methods to determine whether individual characteristics of a policyholder correspond to loss. In the case of life insurance, for example, an insurer might consider the extent to which demographic information like age, occupation, and health contributes to the likelihood that the policyholder will die during the covered term. Now, instead of these traditional techniques, insurers are using artificial intelligence to grind through billions of bits of personal data to find new characteristics and features that correspond to the risk of loss.
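To make the contrast concrete, here is a minimal sketch in Python on synthetic data: a traditional actuarial model fits a handful of hand-picked demographic features with readable coefficients, while an AI-style model ingests hundreds of behavioral signals and finds its own feature combinations. The feature names, mortality model, and model choices are illustrative assumptions, not any insurer’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(1)
n = 10_000

# Traditional underwriting: a few hand-chosen demographic features,
# fit with a transparent statistical model whose coefficients an
# actuary or regulator can read directly.
age = rng.uniform(25, 75, n)
smoker = rng.integers(0, 2, n).astype(float)
log_odds = 0.05 * age + smoker - 4.0                 # toy mortality model (assumed)
died = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

traditional = LogisticRegression(max_iter=1000).fit(
    np.column_stack([age, smoker]), died
)
print("one interpretable weight per feature:", traditional.coef_)

# AI-era pricing: the same applicants, but with hundreds of opaque
# behavioral signals (stand-ins for step counts, purchases, app usage)
# fed to a flexible model that finds its own combinations and weights.
behavior = rng.normal(size=(n, 300))                 # synthetic tracked behavior
ai_model = HistGradientBoostingClassifier().fit(
    np.column_stack([age, smoker, behavior]), died
)
# No per-feature coefficient exists to inspect here; the model's
# "reasons" are spread across hundreds of inputs.
```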

This change presents significant risks for consumers. In a new paper, I show that pricing with AI may create new privacy burdens, allow insurers to intentionally or unintentionally sidestep legal protections for vulnerable groups, and hamstring the ability of policyholders to change insurers. In the context of interactive life insurance policies, for example, insurers using AI know more about their policyholders’ personal lives than ever before; those who have no spare time for exercise and no easy access to healthier food options will have to pay more for insurance; and those who have proven themselves to be safe risks may not be able to leave John Hancock, because the data showing their healthy habits belong to John Hancock, not to the individual.

These are significant concerns, and they cut across wide swaths of the industry. Auto insurers already place telematics devices in cars to track driving habits, a dental insurer may soon price based on data gathered by electric toothbrushes, and it won’t be long before smart homes “talk” to insurers to price the risk of a loss inside the house.

Regulators are beginning to pay attention to these new pricing methods. New York state, for example, recently issued a guidance letter aimed at insurers pricing life insurance with AI. New York will now require insurers to prove that the algorithms they use to price life insurance do not rely on prohibited categories (such as race, national origin, or status as a victim of domestic violence), and to explain why some policyholders receive higher prices than others.

These regulations, however, miss the mark because regulators fundamentally misunderstand the technology involved. When insurers use standard statistical techniques, humans must choose which features to consider, guessing which traits are likely to be relevant to the risk that a policyholder will suffer a loss during the policy period. In that world, prohibiting the use of certain features, and obvious proxies for those features, seemingly prevents the kinds of invidious discrimination one might fear in a product like insurance, which provides financial security for policyholders and society more broadly.

Similarly, requiring insurers to justify why some groups pay more seems to make pricing fairer for those who receive a higher price. If the insurer can explain the price by pointing to a particular characteristic that makes the policyholder more likely to die during the policy period, for example, then it is easier to consider whether the characteristic driving the riskiness is the kind of characteristic on which insurers should base their price. To extend the example: if race is the characteristic that changes the price offered by the insurer, one might be concerned about the fairness of that price.

This same regulatory approach doesn’t work when insurers use AI to determine prices, because AI doesn’t work like traditional underwriting. AI doesn’t start with prior guesses about which characteristics are likely to cause a loss; it sifts through millions of data points to find a set of features, and weights for those features, that best match potential losses. Preventing invidious discrimination and ensuring fairness look different in this new world. Insurers can easily tell the AI to ignore certain prohibited categories, but these prohibitions will not protect the values that states like New York have expressed in their laws. AI can easily find a combination of factors that correlates with a prohibited category, resulting in significantly higher prices for people of different races or national origins even if the AI never uses those categories, or obvious proxies for them, to price risk. In short, given the large number of features AI considers, it may become easier for insurers to charge vulnerable groups more without ever touching a prohibited category.
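The mechanism is easy to demonstrate. In the hedged sketch below, a model is never shown a protected attribute, yet it reconstructs that attribute from a pile of individually innocuous, weakly correlated features. All data are synthetic, and the setup is an illustration of proxy discrimination, not any insurer’s actual practice.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# A protected attribute (e.g., membership in a prohibited rating
# category). The model is never shown this column directly.
protected = rng.integers(0, 2, size=n)

# Twenty-five ordinary-looking features (think shopping patterns or
# neighborhood characteristics), each only weakly correlated with the
# protected attribute. None is an obvious proxy on its own.
features = np.column_stack(
    [protected * 0.4 + rng.normal(size=n) for _ in range(25)]
)

# Yet a model trained on the innocuous features alone recovers the
# withheld attribute far better than chance.
X_train, X_test, y_train, y_test = train_test_split(
    features, protected, random_state=0
)
clf = HistGradientBoostingClassifier().fit(X_train, y_train)
print(f"protected attribute recovered with {clf.score(X_test, y_test):.0%} accuracy")
# Typically well above the 50% baseline: the combination of features
# acts as a proxy even though no single feature does.
```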

Requiring insurers to justify why some people pay more for insurance than others (in effect requiring insurers to construct a causal story) further misunderstands the technology at play. Insurers are not replicating their old pricing methods with AI; they are using AI to find new and surprising characteristics that correspond with policyholder loss. They are looking for new characteristics that help determine whether an applicant for life insurance will die while covered, an applicant for auto insurance will get into an accident, or an applicant for homeowners insurance will suffer damage to the house. AI allows insurers to consider many more characteristics that might be relevant to those questions. While it is possible to figure out which characteristics the AI uses to price policies, the sheer number of characteristics will make it impossible to tell a causal story that covers all of them.
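A short sketch illustrates the problem. One can rank the features a trained model leans on, for example with permutation importance, but with hundreds of inputs the result is a long tail of small, correlational weights rather than a causal account of any individual’s price. Again, the data and model choice here are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.normal(size=(5_000, 200))        # 200 opaque behavioral signals
# In this synthetic setup, risk depends weakly on the first 50 features.
y = (X[:, :50].sum(axis=1) + rng.normal(size=5_000) > 0).astype(int)

model = HistGradientBoostingClassifier().fit(X, y)

# One *can* rank which features the model leans on...
imp = permutation_importance(model, X, y, n_repeats=3, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("top features:", top, imp.importances_mean[top].round(3))
# ...but the result is dozens of comparably small, correlational
# weights. No single characteristic "explains" any applicant's price,
# and the ranking is not a causal account.
```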

Pricing based on risk is far from the only change in the insurance business model. As I lay out in my new article, AI is changing a number of insurance practices. In addition to pricing risk more precisely, AI enables insurers to price policies based on the likelihood the policyholder will change carriers rather than on the risk of loss; to individualize coverages for particular policyholders; to replace human insurance brokers and agents; to identify fraudulent claims more cheaply; and to sequence claims for more efficient payment. Even if one assumes that the data powering the models behind these possibilities are unbiased, representative, and appropriate for the task (an assumption that is tested on a daily basis), these new capabilities could subvert consumer protections built into regulatory and judicial structures.

Using AI to create new forms of coverage might destabilize the barriers between traditional lines of insurance and erode protections built into form regulation, which aims to make sure insurance policies are clear and contain minimum protections. AI-powered robo-advisors may work around the licensing and education requirements that human intermediaries like agents and brokers must satisfy. When insurers expand their use of AI in fraud investigations, they may gain a greater ability to rescind contracts after a loss has occurred based on policyholder misstatements, and they may find it easier and cheaper to investigate more claims, delay payment to more policyholders, and pay less to each claimant. And when used in pricing, as described above, AI may erode basic protections for vulnerable groups and create new burdens for policyholder privacy and mobility in the market.

Regulators must embrace the new world. Regulation is necessary for insurance, which is a vital but deeply flawed market—both insurers and policyholders, for example, have numerous chances to act opportunistically over the course of the relationship, and it is difficult for consumers to evaluate the insurance product at the time of purchase. But to truly understand how to regulate algorithmic insurance, regulators must first rethink why insurance is regulated in the first instance by considering, for example, whether it matters that some groups pay more for insurance than others and why.

Regulators are behind the curve. Rethinking insurance regulation is a project that will have to be carried out across states for the foreseeable future. Rather than react to this new technology piecemeal, regulators must begin to develop the capacity to analyze AI architectures, call for additional transparency about rates and claims, and reposition their scarce resources in line with the new priorities of algorithmic insurance.

Rick Swedloff is Vice Dean and Professor of Law at Rutgers Law School and a co-director of the Rutgers Center for Risk and Responsibility.

The ProMarket blog is dedicated to discussing how competition tends to be subverted by special interests. The posts represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty. For more information, please visit ProMarket Blog Policy.