Best's Review



Insurtech: Lemonade
A Different Approach

Lemonade execs: Want better relationships with insureds? Change the business model.
  • Kate Smith
  • January 2019

Daniel Schreiber, chief executive officer, and Dan Ariely, chief behavior officer, Lemonade, said they set up the young insurer as a system that favors the economic interests of policyholders, whether or not they file claims. Schreiber and Ariely spoke with AMBestTV at InsureTech Connect 2018, held in Las Vegas.

Following is an edited transcript of the interview.

Daniel Schreiber

If I can increase the level of trust—it doesn’t have to be absolute, but instead of having a presumption of guilt you have a presumption of innocence—then I can let bots pay you in three seconds.

Very early on Lemonade created the role of a chief behavior officer. Why did you do that?

Schreiber: One of the founding notions within Lemonade was the idea that people hate insurance. They deeply distrust it, and they buy it when they have to. We found that intellectually fascinating. It's a social good. It's helping people in their hour of need. It's a community coming together. It's an economic necessity. It has all the makings of something that people would love, and yet, the perception couldn't be further away from that. You look in the Urban Dictionary and the definition of insurance is, “A promise to pay later that is never fulfilled.”

Early on, we were intellectually curious about why that is. Dan had spent 10 years researching why people can behave so well and be trusting in one set of contexts, and behave poorly and be distrustful in another. We said, "Ultimately, insurance is about insuring people, not property. We better crack that nut open and take a look inside."

Ariely: I'll say that from my perspective, I did meet lots of people from insurance companies. I remember a discussion about consumer fraud. They basically said, “Everybody cheats us. Everybody exaggerates their claims. People are trying to find tricks to get us to pay all kinds of things we shouldn't be paying.”

I said, “How do you deal with this?” They just say, “We just increase the price. As long as we can price it up, we don't care.” This was not dealing with the problem. It was not dealing with the problem of saying, “We've created a system that has very little trust, lots of antagonism.” Then the actuarial side says, “Let's just price it correctly,” rather than, “let's fix it.”

How have you baked in some behavioral economic principles into your business model?

Schreiber: Actually, just about everything that we do is informed by the research that Dan and other people have done in this field. It starts with a business model. One of the reasons we founded a new insurance company, rather than just putting a technological facade in front of a traditional insurance company, is that the teaching that Dan had for us is that the business model itself creates the sense of a zero sum game.

If you claim $1,000, and I don't pay it, I'm $1,000 richer. You're $1,000 poorer. That creates distrust. Just about everything that creates distrust is manifested in insurance—an asymmetry of information, a win-lose value proposition, conflicts of interest.

Our business model deals with it in a fairly fundamental way: we have capped the amount of money that we can make. We take a flat fee. When you make a claim against us, our flat fee is unaffected by whether or not we pay that claim. To a first approximation, we're indifferent to whether we pay you or not. If there's money left over at the end of the year, we're not going to keep it; it goes to a charity of your designation. If there are insufficient funds, reinsurance pays for it.

Now, the system has to work overall. It can't year-after-year fail, but it removes the immediate conflict of interest. That's a radical departure from traditional insurance models.
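The flat-fee mechanism Schreiber describes can be sketched in a few lines of Python. The 25% fee rate and the dollar figures below are hypothetical illustrations, not Lemonade's actual terms; the point is only to show why the insurer's income is indifferent to claims paid.

```python
# Hypothetical sketch of a capped flat-fee "giveback" model.
# The 25% fee rate and all dollar figures are illustrative, not Lemonade's terms.

def settle_year(premiums, claims_paid, fee_rate=0.25):
    """Split a year's premiums under a capped flat-fee model."""
    fee = premiums * fee_rate        # the insurer's take is fixed up front
    claims_pool = premiums - fee     # everything else is earmarked for claims
    if claims_paid <= claims_pool:
        # Leftover funds go to charity, not to the insurer.
        return {"fee": fee, "giveback": claims_pool - claims_paid, "reinsurance": 0.0}
    # A shortfall is covered by reinsurance, again without touching the fee.
    return {"fee": fee, "giveback": 0.0, "reinsurance": claims_paid - claims_pool}

# Whether claims come in light or heavy, the insurer's fee is identical,
# so paying any individual claim never costs the insurer anything.
light = settle_year(1_000_000, 500_000)   # light claims year
heavy = settle_year(1_000_000, 900_000)   # heavy claims year
assert light["fee"] == heavy["fee"] == 250_000.0
```

Because `fee` does not depend on `claims_paid`, denying a claim never enriches the insurer, which is the conflict-of-interest removal the interview describes.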

Ariely: Let me describe one of my favorite trust-creating experiments. There's a waiter and there are four diners. The waiter comes to the first diner, and he says, “What would you like?” The diner says, “I want the fish.” The waiter says, “You know what? The fish is too expensive. Why not take the chicken? It's cheaper and better.” Then, we measure. Is the first diner going to take their advice, but also, are the next diners going to take their advice, and is the table as a whole going to take the recommendation for wine? The answer is yes.

Case No. 2. The waiter comes to the first diner. "What would you like?" "I want the fish." The waiter says, "Don't take the fish. Take the lobster. It's only four times more expensive, but really, really good." How likely now is anybody at the table to take the recommendation? Zero.

Now, what's the difference between the first waiter and the second waiter? The first waiter said in a very clear way, “I am willing to do something that is in your best interest, even if it costs me money. I'm recommending something that is cheaper. The restaurant will make less money. I will make less money, but I'm doing it, because I value your utility.”

Now, the second waiter. Maybe the lobster is a great idea, but you don't know if they're working for you or against you. Imagine you have an insurance company, and they say, “You should increase the coverage on your home.”

Who are they working for? Is it for you or for them? You never know. Every recommendation they make is certainly in their best interest, and maybe also in mine. What was very important for us is to be very clear with people and say, "We are the first type of waiter."

We are basically tying our hands. We are creating a model where we will always have your best interest in mind, and we're not going to make recommendations that leave you confused about whether they benefit us or you. They will benefit you.

Schreiber: Maybe I can just add one more thing, which is sometimes, when we talk about this model, folks in insurance take this the wrong way, and are offended by it, as if we're impugning the good reputation of the good people there.

Really, this is about game theory, and an understanding of how incentives work. It is because we think that our moral fiber is in no way superior to anybody else's that we feel we have to fix the game. The problem isn't the players.

If we all took a step to the right and were in the same seat, we would behave in the same way, and engender the same distrust. It's really about starting from scratch, thinking about the incentive structures, and creating a game where we don't have the capability of taking advantage of you, because that money is out of our reach. We would be as tempted as anybody else.

Ariely: When we started this—actually, when I started the research on dishonesty—it was very easy to say, "There are some bad apples out there." There are bad apples, good apples. We're good apples. Life is solved because we're good apples. Actually, let's just remind people that we are good apples, and just trust us. It's not that simple.

Conflicts of interest are corrosive. You take a good apple, and you put them in the situation where they see reality in a distorted way. They get more money, and they can't see reality correctly. You see it everywhere. You know that in a soccer game, people are blinded by their motivation.

They see reality in a certain way. When it comes to big conflicts in the world—Brexit, the Israeli-Palestinian conflict, whatever it is—people have a perspective. Glasses are tinted by their own motivation. That's how conflicts of interest work. We said, “We can create a system that has conflicts of interest, and then come to our customers and say, just trust us. We are really good people.”

Or we can say, "Let's redesign the system. We don't trust ourselves. If we were in a bad situation, we would behave just as badly. Let's fix the situation instead." Another way to think about it is to say, "I don't want to live in a house that has chocolate cake in the fridge all the time. I have good intentions, but I know I'll fail. Let's just not have the chocolate cake there."

Dan Ariely

We are basically tying our hands [by taking a flat fee]. We are creating a model where we would always have your best interest in mind.

Starting from scratch, you can build this type of model. For a legacy insurance company, is there a way to course correct?

Ariely: Very tough. I can't think about a simple way to do it, because sometimes, people think it's just the business model, but there's so much connected to it. There's so much that is built, step over step over step, to basically consolidate that business model.

Take any company, you name it: they could say, "We're going to change," or, "We'll give money to charity." They could suddenly give 10% to charity, and it would not change their structure. The business model, in a deep sense, has to embody the notion that insurance is collaborative, that insurance is a social good.

You start by saying, "Insurance is a social good." We need business models built around that social good to get insurance to live up to its potential. At the end of the day, especially if you think about people at the low-income level, people who don't have insurance just get into very negative spirals.

Something bad happens, you don't have insurance, and things just start deteriorating. For us, it's very important to try and get as many people as possible onto the insurance platform. We think that's the only way to do it.

Trust and technology. How do you blend the two, or how do you use technology to build the trust?

Schreiber: For us, it really is hand in glove. We talk about our company as being founded on behavioral economics and artificial intelligence. When it comes to claims, we pay claims algorithmically pretty often. About a third of our claims are paid by a bot in three seconds.

A precursor to that is increasing the level of trust. One of the fascinating things in insurance is, there are very few lovable brands. USAA is a standout brand that manages to engender an emotional connection and true loyalty. Everybody else is saying, “Switch to me, and you'll save $700. Fifteen minutes can save you 15%.” These are the standard promotional ways that insurance companies act. If you want to talk about creating a trusting and trustworthy company, you've got to think back to the basics, and do what Dan was talking about, about changing the very foundations.

Which is why we're an insurance carrier, and not a technology vendor to existing companies. Once you get there, then technology can accelerate things dramatically. If I can increase the level of trust—it doesn't have to be absolute, but instead of having a presumption of guilt you have a presumption of innocence—then I can let bots pay you in three seconds.

You're delighted by that. You're fascinated. You share it with your friends, and you start creating a virtuous cycle. With about 2% of the claims that we pay, people call us up afterward, and they say, “Hey, my girlfriend found the laptop. How do I return the money to you?”

You've taken stances that have been, perhaps, controversial, including a stance on guns. Were you worried, as a business, that that could have a negative effect on your business, and has it?

Schreiber: We absolutely were worried about it. We were in Vegas [in 2017] when there was a horrible mass shooting there, which triggered a rethink for us. We write home insurance policies, which means that we are both protecting people's guns and protecting people from things that they do with their guns.

We felt that it's incumbent on us to be thoughtful about that. We're not allowed to be neutral about that. If we're going to be the guys insuring people if they do something reckless with their gun, or if the gun is stolen, then we've got to be thoughtful about it.

We took a position that was controversial. We got a fair amount of hate mail. Somebody wrote in that they hope that the next mass shooting takes place in our offices. It wasn't something that we were sanguine about, but it's something that we were proud of, and thoughtful about.

We think it's appropriate for brands like us to take a position. We did the same thing about coal and polluting industries. Seventeen of the 18 hottest years on record happened since the year 2000. In 2017, insurance companies like ours suffered more losses than in any previous year in the history of insurance.

Yet, U.S. insurance companies are the second-biggest funders of polluting industries. They have almost half a trillion dollars invested in coal and other polluting industries.

How do I take premiums from you to protect you from wildfires, from hurricanes, and other weather-related incidents, and then invest those premiums in the very things that are responsible for the disasters that I am protecting you against?

Again, it wasn't something that we felt we could be indifferent to, or not take a position on. It is controversial, but it's a source of pride for us.

Data is changing how we underwrite risk and allowing companies to underwrite individuals very specifically. How might that affect trust?

Ariely: I think it can. There's a risk there. Partially, it's about how transparent you are, how you communicate, and what else you do with the data. A lot of people in the insurance world are concerned about pricing. We are also concerned about prevention. Think about the information we send people about what they should be doing; we want to apply the same thing to prevention.

Let's think about car insurance. If somebody has a car wreck, even if we pay them for it, we really haven't made them whole. I was badly injured a long time ago, so I think a lot about injuries. We don't really compensate for the human misery. If we can do things to prevent that, that's big.

If we come to people and say, "Give us more data; we want to price discriminate," the answer is no. But we can say, "Give us data, and we're going to do two things. We're going to help you identify weak points in your own life, where you should maybe do things differently, try new things, and so on."

It will also come up with a price difference, but it's not just the price difference. That, I think, is a very different story about trust. Again, I'm going to use an example from driving. Differences in patterns of texting and driving create about a 1,300 difference in the probability of getting into an accident.

That's huge, and that's just driving. In driving, we have data. What about theft? What about losing things? What about accidents at home? We don't really have the data to try and do that.

My guess is, it's not going to be very different from texting and driving. It's not that we are going to price discriminate better, and you are just going to do the same thing. No. We are going to understand you better. You're going to understand you better. We'll help you understand yourself better.

Pricing will be part of it, but the really important thing is you would learn how to prevent loss. By the way, the other thing about this is that if we help people prevent loss, I think there's a real loyalty that comes with that.



Kate Smith is a senior associate editor. She can be reached at
