Best's Review

AM BEST'S MONTHLY INSURANCE MAGAZINE



The Last Word
Putting a New Face on Reality: Deepfakes Raise Novel Challenges for Insurers

Artificially manipulated audio and video could soon present risks to insurers. Researchers are creating tools to better distinguish the real from the artificial.
  • Anthony Bellano
  • November 2022

In the early days of Russia's invasion of Ukraine, Ukrainian President Volodymyr Zelenskyy appeared in a video telling Ukrainian soldiers to surrender to the Russian invaders. But various news agencies, as well as the internet debunking site Snopes, said the video was a fake, and an obvious one at that.

The ploy didn't work, but CyberCube, a cyber analytics firm focused on the insurance world, is warning insurers that the danger of what it calls “deep fake video mimicry” is very real.

“Deep fake video mimicry” occurs when hackers use artificial intelligence and machine learning to create realistic images of people interacting with a video camera, drawing on source data such as video, pictures and audio samples of the deepfaker's targets, CyberCube says in its report Social Engineering: Blurring reality and fake: A guide for the insurance professional.


Because of the pervasive use of social media, “source data is rarely a problem for the deep faker,” CyberCube said in the report. But the firm says carriers and underwriters can protect themselves by understanding how the companies they work with identify risks and guard against them.

For example, some researchers are already developing artificial intelligence that can help spot deepfake videos. Electrical and Computer Engineering Professor Amit K. Roy-Chowdhury and graduate students at the University of California, Riverside conducted a study on detecting changes in facial expressions, in addition to changes in identity. The results were released in a paper titled Detection and Localization of Facial Expression Manipulations.

The system they developed uses a two-stream approach. The first stream is a facial expression recognition system that extracts “expression-relevant features” from the face.

The second is an encoder-decoder architecture that analyzes those features to detect, and localize, any manipulation.

The research was conducted on real-world data sets of entire faces. Detection of facial expression manipulation achieved an accuracy rate of 90% or above, Roy-Chowdhury said.

In addition to protecting themselves, insurers should explore how to respond to, and limit exposure from, attacks once they happen, according to CyberCube.


“Additionally, underwriters need to adequately charge for this cover to balance out the loss potential, not only on an individual risk basis, but for a portfolio of risk affording the coverage,” CyberCube said.

Roy-Chowdhury said it is important to understand what AI can and can't do.

“A lot of these AI-based methods are very, very hyped, especially when people read about them in the media,” he said.

“I think that this hype is a real issue in understanding what AI can and cannot do. This is a function of the data set. When the data changes and the quality of the data changes, I am absolutely sure performance will fall off.”


Anthony Bellano is an associate editor. He can be reached at anthony.bellano@ambest.com.


