Best's Review

A.M. BEST'S MONTHLY INSURANCE MAGAZINE



At Large
Targeted Questions

What to ask to determine how much artificial intelligence could help your firm.
  • William H. Panning
  • August 2018


Don’t be too confident that your firm’s data is error-free.

Your CEO has asked you and other managers to meet with a respected consulting firm to discuss whether they could apply new tools, based on artificial intelligence (AI), to your firm’s data, and so increase profits. What questions should you and your colleagues ask to evaluate their proposal? Here are a few recommended ones, each followed by the reason for asking.

How much data would this require? AI software defeated the world champion of the board game Go, discovered how to play and win numerous video games, and learned to distinguish between pictures of different animals. But in each case, it learned from playing or viewing millions of games and pictures. By contrast, a firm’s customer data is typically limited to its current and past clients.

How accurate must the data be to produce useful results? In large projects for various clients, my colleagues and I were always told that the data had been thoroughly scrubbed. In reality, we found incredible amounts of missing or incorrect data. At one firm we found that thousands of customers apparently resided at regional offices. Why? Because if addresses were missing, clerks substituted the address of the insurer’s closest regional office. (Implication: Don’t be too confident that your firm’s data is error-free.)

How do you prevent overfitting? Overfitting means finding patterns that are statistically significant (i.e., unlikely to occur by chance) but meaningless in reality. If your data set has 100 variables, there will be nearly 5,000 pairwise correlations among them. One percent of those, roughly 50, will be “statistically significant at the 1% level” even if the data is purely random. If, using real data, a researcher finds, say, 80 relationships that are statistically significant at that level, roughly 50 of them are likely to be meaningless. But which ones? How does one distinguish the 30 or so “real” correlations from the 50 or so “meaningless” ones? The usual method is to use one half of the relevant data to discover statistically significant relationships, and then test those findings by seeing which of them also occur in the other half. Relationships found in both halves are more likely to be real than accidental. But there is no guarantee of this.
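The split-half check described above is easy to demonstrate. In this hedged sketch (the row count, variable count, and random seed are arbitrary assumptions), purely random data yields roughly 50 “significant” pairwise correlations per half at the 1% level, yet almost none of them replicate in the other half:

```python
# Sketch of the split-half validation described in the text: on purely
# random data, about 1% of the ~4,950 pairwise correlations among 100
# variables test "significant" at the 1% level, but few survive in a
# held-out half. Sizes and seed are illustrative assumptions.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
n_rows, n_vars = 200, 100
data = rng.normal(size=(n_rows, n_vars))   # pure noise: no real structure
half1, half2 = data[:100], data[100:]

def significant_pairs(x, alpha=0.01):
    """Return the variable pairs whose correlation has p < alpha."""
    sig = set()
    for i, j in combinations(range(x.shape[1]), 2):
        _, p = stats.pearsonr(x[:, i], x[:, j])
        if p < alpha:
            sig.add((i, j))
    return sig

s1 = significant_pairs(half1)
s2 = significant_pairs(half2)
print(len(s1))        # roughly 50 "discoveries" from noise alone
print(len(s1 & s2))   # far fewer replicate in the held-out half
```

Replication across halves filters out most accidental findings, but as the text notes, it cannot certify that the survivors are real.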

Can AI tools significantly improve underwriting decisions, where the results are not known immediately but emerge over time and can vary considerably among otherwise similar clients or properties? The AI successes mentioned earlier occurred in activities where the software receives immediate feedback: Did it defeat its Go opponent, win the video game, or correctly identify a picture as a cat rather than a dog? AI methods that learn from such labeled feedback are called supervised learning (game-playing programs use a close cousin, reinforcement learning). For problems where immediate feedback is not possible, AI relies on unsupervised learning methods, which are far more difficult. Their results are less clear-cut and require creative interpretation as well as science.
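The distinction can be made concrete with a toy example. In this sketch (the data, the nearest-centroid classifier, and the clustering initialization are all illustrative assumptions, not the author's method), the supervised model is graded against known outcomes, while the unsupervised model must propose groupings that someone still has to interpret:

```python
# Illustrative contrast between supervised and unsupervised learning.
# Data and model choices are invented for demonstration only.
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated groups of "risks"; labels 0/1 are known outcomes.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Supervised: nearest-centroid classifier trained WITH the labels.
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = (pred == y).mean()
print(accuracy)  # near 1.0: feedback tells us exactly how well we did

# Unsupervised: 2-means clustering with NO labels. Cluster ids are
# arbitrary, so a human must still decide what each cluster "means".
centers = np.stack([X[0], X[-1]])  # simple deterministic initialization
for _ in range(10):
    assign = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
    centers = np.array([X[assign == k].mean(axis=0) for k in (0, 1)])
print(np.bincount(assign))  # two clusters, but unlabeled
```

The supervised model reports a score because the right answers are known; the clustering output is just a partition, which is why the article calls its interpretation part art as well as science.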

Can AI analysis identify potentially profitable business that our firm is failing to attract? Don’t bother to ask. Absent data from other firms, the answer is no.

AI tools can be useful, but they are sometimes overhyped. These questions, and their answers, may help you establish realistic expectations of how, and to what extent, AI-based tools might benefit your firm.

Best’s Review columnist William H. Panning, principal of ERMetrics, LLC, can be reached at bill@ERMetrics.com.



