One way to convince regulators to relax auto rate rules

By Greg Meckbach | September 27, 2019 | Last updated on October 30, 2024

Artificial intelligence may be able to help auto insurers convince regulators to relax certain prohibitions on rate restrictions, the founder of an A.I. vendor suggests.

In Ontario, insurers are prohibited from using several rating factors, such as certain at-fault accidents, a client’s credit history, or how long the client has lived at their current address. The client’s gender and where they live are fair game.

“I understand the rationale for that, but the idea with A.I. is that we can pick the bad apple out of any group of individuals,” Gary Saarenvirta, CEO of Daisy Intelligence, said in an interview on the use of A.I. in claims and underwriting. “If we can convince the regulators that the technology can do this in a non-biased way, then maybe they would be open to relaxing some of those regulatory limitations on underwriting.”

Saarenvirta was asked by Canadian Underwriter about regulations that limit the criteria auto insurers may use to set rates. He said A.I. can help insurers drill down more deeply into underwriting data and spot “bad apples” within risk pools. This, he suggested, would allow more segmented, and hence fairer, analysis of the true risks within a company’s portfolio.

“It’s pretty standard things you are able to use – age, gender, geographic area, car usage, how much driving you do, previous driving history,” Saarenvirta said of the criteria insurers are currently permitted to use under the regulations. “It’s very vanilla standard.”

Could regulators be convinced of an A.I. alternative?

As a technology vendor, Toronto-based Daisy would leave the lobbying to industry groups. “I am not the person who would go fight that battle,” said Saarenvirta. “It’s a battle that would have to be fought on a different front.”

Daisy’s products include software designed to let underwriters analyze new business applications. A different Daisy product is designed to flag claims with unusual characteristics that could indicate fraud.

With both claims and underwriting, the Daisy software examines data sets, compares them against others, and flags anything unusual. The idea is that a human worker poring through the data cannot come anywhere near the volume of work the computer can handle.
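
Daisy does not publish the details of its models, but the flavour of this kind of automated anomaly flagging can be sketched with an off-the-shelf isolation forest. Everything below is an assumption made for illustration: the claim records and column names are invented, and scikit-learn stands in for whatever Daisy actually runs.

# Illustrative sketch only, not Daisy's actual method.
# Assumes pandas and scikit-learn; all data and column names are invented.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical claims data: each row is one claim.
claims = pd.DataFrame({
    "claim_amount":   [2400, 1800, 2100, 15500, 2300, 1900],
    "days_to_report": [1, 2, 1, 30, 2, 1],
    "prior_claims":   [0, 1, 0, 4, 0, 1],
})

# An isolation forest scores how easily each record can be "isolated"
# from the rest of the data set; the easiest to isolate are the unusual
# ones, which a human adjuster might then review for possible fraud.
model = IsolationForest(contamination=0.2, random_state=0)
claims["flag"] = model.fit_predict(claims)  # -1 = unusual, 1 = typical

print(claims[claims["flag"] == -1])  # the large, late-reported claim stands out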

Used for auto underwriting, A.I. could take a pool of people who look similar at first glance and examine attributes that indicate whether each person is likely to have an accident, Saarenvirta suggested.

He described the current approach to auto underwriting as one that looks backwards. For example, if a group of young males had more accidents in the past than a group of young females, the regulator lets the insurer charge the young males more. “With A.I., you could try to spot which of the young males would be higher risk,” said Saarenvirta, adding the same could be true for territory.

The idea is that finely segmenting the data would mean that not everyone who falls within a broad category, such as a gender or a territory, would necessarily be deemed a high risk. That would lead to fairer pricing based on the true risks rather than on broad, abstract categories. Both territory and gender have been challenged publicly as rating factors.
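
As a toy illustration of that segmentation argument (and not of any insurer’s actual pricing model), the sketch below fits a simple logistic regression to invented individual-level attributes for one broad group and shows how widely predicted accident risk can spread within it. All feature names, coefficients and data are assumptions.

# Illustrative sketch only: risk spread inside one broad rating category
# (say, "young male drivers"). All data and feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000

# Hypothetical individual-level attributes for members of one group.
annual_km = rng.normal(15000, 5000, n)   # driving exposure
past_tickets = rng.poisson(0.5, n)       # prior convictions
night_share = rng.uniform(0, 1, n)       # share of driving done at night

# Synthetic "had an accident" labels driven by those attributes.
logit = -3 + annual_km / 20000 + 0.8 * past_tickets + 1.5 * night_share
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([annual_km, past_tickets, night_share])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Predicted risk varies widely inside the single broad category, which is
# the case for pricing individuals rather than the whole group at one rate.
p = model.predict_proba(X)[:, 1]
print(f"10th percentile risk: {np.quantile(p, 0.1):.2f}, "
      f"90th percentile risk: {np.quantile(p, 0.9):.2f}")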

In Ontario, a bill to end territorial rating was sent this past March to the Standing Committee on Finance and Economic Affairs. Bill 42 proposes to prohibit insurers from charging rates based on factors “primarily” related to a client’s postal code or telephone area code. The private member’s bill was sponsored by Parm Gill, a backbench MPP for the Progressive Conservative party, which has had a majority government since June 2018.

The use of gender as a rating factor has been challenged in court. In 1984, the Ontario Human Rights Commission ruled that insurers may not charge young men higher rates. But that ruling was quashed by the Divisional Court, a result the Supreme Court of Canada upheld in a divided 1992 decision.

Ontario Human Rights Commission v. Zurich Insurance Company arose when Michael Bates took Zurich Insurance to the Human Rights Commission in 1983, complaining that Zurich had denied him the right to contract on equal terms without discrimination and the right to equal treatment in services.

The Ontario Human Rights Commission’s appeal was limited to the situation as it existed in 1983 – not in 1991, when the Supreme Court of Canada heard the case.

The majority found that, as of 1983, insurers had no reasonable practical alternative to basing rates on age and gender. Although the relevant statistics started being collected in Ontario in 1985, that effort “would only have generated meaningful statistics in 1988,” Justice Sopinka noted in the ruling, released in 1992.

“It may well be that there now exists a statistical basis for determining insurance premiums in a non-discriminatory manner,” wrote Sopinka. “The insurance industry must be allowed time to determine whether it can restructure its classification system in a manner that will eliminate discrimination based on enumerated group characteristics and still reflect the disparate risks of different classes of drivers.”

In dissent, Justice Beverley McLachlin noted that Ontario’s superintendent of insurance had been pushing since 1977 to end rate discrimination against young males.


Greg Meckbach