Insurers and regulators must stamp out discrimination in insurance pricing to ensure fairness for consumers, says new study

Bayes Business School co-authored research says policies can be changed to avoid unfair treatment of customers.

Insurance pricing models must change if discrimination with respect to gender and ethnicity is to stop, a study has found.

Co-authored by Bayes Business School, the report explores how protected characteristics can be exploited when pricing policies, because of their association with features such as policyholders’ postcodes and credit scores. These features are, in themselves, considered legitimate risk factors used by insurers when generating quotes for their customers.

It is illegal in many jurisdictions to use protected policyholder characteristics when calculating insurance prices, although there remain concerns that such characteristics may still be impacting insurance quotes indirectly. For example, policyholders’ postcodes are commonly used in the calculation of their insurance premium, but this information could be an effective proxy for determining ethnicity.

While standard insurance pricing models would not explicitly use ethnicity as an input, its association with policyholders’ postcodes means that ethnicity may still be impacting prices. This phenomenon is often referred to as “proxy discrimination”.
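
For readers who want the idea in symbols, the effect can be sketched roughly as follows; the notation below is illustrative and is not taken from the report. Writing Y for claims, X for ordinary rating factors such as postcode, and D for a protected characteristic such as ethnicity:

    \text{best-estimate price: } \mu(x, d) = E[\,Y \mid X = x,\, D = d\,]
    \text{unawareness price: } \mu(x) = E[\,Y \mid X = x\,] = \sum_{d} \mu(x, d)\,\Pr(D = d \mid X = x)

Although the unawareness price never takes the protected characteristic as an input, it still depends on it through the weights \Pr(D = d \mid X = x). When postcode is informative about ethnicity, those weights vary from policyholder to policyholder, which is precisely the proxy discrimination described above.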

The report finds evidence of proxy discrimination, using data from a motor insurance portfolio. Specifically, it shows that, based on standard technical pricing models, young members of one minority ethnic group would be charged premiums that are higher than they would be if ethnicity had no (direct or indirect) impact on insurance prices.

The co-authors propose a new method for removing proxy discrimination from insurance pricing models. The proposed method takes risk predictions for policyholders that are based on all available characteristics, including protected ones such as gender or ethnicity. Subsequently, these characteristics are “averaged out” from prices.
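
As a rough illustration of this averaging-out step, a minimal sketch in Python is given below. It is not the authors’ implementation: the gradient-boosting model, the scikit-learn library and all column names are assumptions made purely for illustration, and the report should be consulted for the formal construction.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    def discrimination_free_prices(fit_df, score_df, rating_factors, protected, claims_col):
        """Sketch: average a protected characteristic out of model-based prices."""
        # Step 1: fit a best-estimate model on ALL characteristics,
        # including the protected one (e.g. ethnicity or gender).
        X = pd.get_dummies(fit_df[rating_factors + [protected]])
        model = GradientBoostingRegressor().fit(X, fit_df[claims_col])

        # Step 2: portfolio-level (marginal) weights of the protected characteristic.
        weights = fit_df[protected].value_counts(normalize=True)

        # Step 3: price each policyholder as if they belonged to every protected
        # group in turn, then average the results using the marginal weights.
        # The policyholder's own protected value is never used at this stage.
        prices = np.zeros(len(score_df))
        for value, weight in weights.items():
            counterfactual = score_df[rating_factors].copy()
            counterfactual[protected] = value
            Xc = pd.get_dummies(counterfactual).reindex(columns=X.columns, fill_value=0)
            prices += weight * model.predict(Xc)
        return prices

Because every policyholder is weighted with the same portfolio-level distribution, an individual’s own protected characteristic has no influence on their price, either directly or through the proxying effect described above.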

The authors – Andreas Tsanakas (Bayes Business School), Mathias Lindholm (Stockholm University), Ronald Richman (Old Mutual Insure) and Mario Wüthrich (ETH Zurich) – show that this process mathematically uncouples protected characteristics from other variables (such as policyholders’ postcodes or credit scores). This leads to insurance pricing that avoids proxy discrimination, while enabling the “safe” use of these variables for risk discrimination in pricing models. In recent court action in the United States, Washington regulators have attempted to ban the use of credit scores in insurance pricing models due to the potential for proxy racial discrimination, and investigations into possible civil rights violations are underway.

Adopting such an approach – counterintuitively – requires the use of protected characteristics to generate discrimination-free prices, since without such information insurers are unable to compensate for proxy discrimination. Responding to this challenge, the authors developed an artificial intelligence-based method which only requires the collection of sensitive information from a small subset of policyholders.
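
To illustrate why a small subset can be enough – continuing the hypothetical sketch above, and not reproducing the authors’ AI-based method – note that the sensitive information is needed only to fit the model and to estimate the portfolio-level weights, never to score an individual policyholder:

    # Hypothetical usage: only a consenting subset has disclosed ethnicity.
    labelled = portfolio[portfolio["ethnicity"].notna()]   # small subset with sensitive data

    prices = discrimination_free_prices(
        fit_df=labelled,          # sensitive data used here, for fitting and weights
        score_df=portfolio,       # the whole portfolio is priced without it
        rating_factors=["postcode_risk_band", "credit_score", "driver_age"],
        protected="ethnicity",
        claims_col="claim_cost",
    )

Here “portfolio” and all column names are invented for the example, and the authors’ method is considerably more refined; the sketch is only meant to convey why collecting protected data from a limited group can be sufficient to correct prices across an entire portfolio.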

Professor Tsanakas, a leading expert in actuarial science and risk management at Bayes Business School, said that it should be the responsibility of insurers to demonstrate that discrimination is not a material issue in their portfolios and, if it is, to adjust their prices. However, a policy framework for this process is currently lacking, so insurers do not have a clear regulatory signal on how to manage this problem.

“We have proposed a practical, adoptable method for removing the effects of discrimination from pricing models by removing the proxying of characteristics,” said Professor Tsanakas. “Proxy discrimination is a real issue, and it should be addressed in pricing. But to address this problem insurers need to collect information on protected characteristics, which in turn raises privacy concerns.

“It is important that strict protocols are introduced by regulators about how such information is collected and used, and how this process is explained to policyholders.”

Additionally, the authors show that different notions of fairness put forward in the literature, focusing on the disparate impact of pricing methods among different demographic groups, are in potential conflict with the requirement to avoid proxy discrimination.
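
For example (in illustrative notation, not quoted from the report), a common group-fairness criterion such as demographic parity asks that average prices be equal across protected groups,

    E[\,h(X) \mid D = d\,] = E[\,h(X) \mid D = d'\,] \quad \text{for all groups } d, d',

whereas a price h(X) built only from legitimate risk factors will generally have different group averages whenever those risk factors are themselves distributed differently across the groups. Removing proxy discrimination and imposing such group-level criteria can therefore pull in opposite directions.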

Co-author Ronald Richman, Chief Actuary and AI expert, added: “Issues of fairness and discrimination in insurance pricing are of great current importance and raise substantial challenges for the insurance industry.

“To enable progress for the insurance industry, policymakers and professional associations need to take a stance on what notion of fairness is appropriate in the specific context of insurance pricing.

“Our findings offer practical solutions, highlighting the dilemmas that policymakers need to navigate and providing tools for the industry to adjust pricing while maintaining competitiveness.”

Ends

Notes to editors

  1. The proposed method for addressing proxy discrimination is introduced in this paper.
  2. How do we know that proxy discrimination is a real issue? We analysed a confidential real data set in this paper and observed the presence of proxy discrimination. We are currently conducting additional research to develop methods to quantify the materiality of this problem across an insurance portfolio.
  3. The potential conflict between notions of fairness and addressing proxy discrimination is proved here.