Not From An Actuarial Textbook

One of the mainstays of actuarial education is thinking about the ways one might underwrite auto liability insurance. The most accurate predictor of risk should be miles driven; after all, the more you’re on the road, the more likely you are to get into an accident.

The problem with calculating the number of miles driven is that it’s simply impractical. You’d need an insurance company to install a mileage monitor in every car, scoffs the textbook, and that’s just too costly to do. Some day, perhaps…

Well well well, the day has arrived!

Telematics insurance relies on a data box the size of a mobile phone, which the insurance company installs in your car. The box does not damage the car and will not affect the warranty; it uses less energy than a car radio, so it should not drain your battery.

What Data Is Collected?

Data from this box is collected by GPS, enabling insurers to monitor (see the sketch after this list):

  • At what times the car is used
  • The distance it travels at those times
  • Where the car is located
  • On what type of roads the car travels
  • Speed of travel
  • Braking behaviour of the driver
  • Direction and speed of travel before and after a collision
  • Force of impact in a collision
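Since the textbook’s dream is now just arithmetic, here’s a toy sketch of how fields like these might feed a usage-based premium. The rates and loadings are entirely my own invention, not any insurer’s actual rating formula:

```python
# Toy usage-based pricing sketch. All rates and loadings are invented
# for illustration; real telematics rating plans are far more involved.

def usage_based_premium(miles, night_miles, hard_brakes_per_100mi,
                        base_rate_per_mile=0.05):
    """Monthly premium in dollars from a handful of telematics fields."""
    night_share = night_miles / miles if miles else 0.0
    # Hypothetical loadings: night driving and hard braking push the rate up.
    risk_multiplier = 1.0 + 0.5 * night_share + 0.02 * hard_brakes_per_100mi
    return base_rate_per_mile * miles * risk_multiplier

# 800 miles this month, 120 of them at night, 3 hard brakes per 100 miles:
print(round(usage_based_premium(800, 120, 3), 2))  # 45.4, vs 40.0 for a clean daytime driver
```

The point is only that once the box is in the car, “miles driven” stops being a thought experiment and becomes an input.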

Buy High, Says Venture Capitalist

Here is a post by Steve Blank, a venture capitalist, identifying some facts:

Facebook takes our need for friendship and attempts to recreate that connection on-line.

Twitter allows us to share and communicate in real time.

Zynga allows us to mindlessly entertain ourselves on-line.

Match.com allows us to find a spouse.

At the same time these social applications are moving on-line, digital platforms (tablets and smartphones) are becoming available to hundreds of millions. It’s not hard to imagine that in a decade, the majority of people on our planet will have 24/7 access to these applications. For better or worse social applications are the ones that will reach billions of users.

Yet they are all less than 5 years old.

Here is his inspirational conclusion:

It cannot be that today we have optimally recreated and moved all our social interactions on-line.

It cannot be that Facebook, Twitter, Instagram, Pandora, Zynga, LinkedIn are the pinnacle of social software.

All of these things are true. And here’s his opening line: “The quickest way to create a billion dollar company is to take basic human social needs and figure out how to mediate them on-line.”

I think he needs to shift to past tense, there.

The most influential personal/enterprise software companies in the early 80s were Apple and Microsoft. And who are they today?

No need is ever satisfied perfectly, but there is such a thing as a big head start. Surely it’s more likely that the Instagram acquisition represents the end of the disruptive phase of this technology trend.

Innovation comes from working on a need that you have that isn’t yet satisfied. The best itches to scratch are ones people will pay you for, obviously. And as a general rule, the market price for something is typically about as much money as someone else can make with it.

Social media is a bit different, much like newspapers, radio, TV and other advertising-driven businesses were different. These are super-scalable goods with the ability for pinpoint market segmenting. All very exciting, but their economic function is simply to make advertising more efficient.

I’d be more inclined to think that the next wave of billionaires will attack the problem of process inefficiency more directly. History tells us that this usually happens by eliminating processes entirely.

My favorite disruption came soon after this:

In 1898, delegates from across the globe gathered in New York City for the world’s first international urban planning conference. One topic dominated the discussion. It was not housing, land use, economic development, or infrastructure. The delegates were driven to desperation by horse manure.

The horse was no newcomer on the urban scene. But by the late 1800s, the problem of horse pollution had reached unprecedented heights. The growth in the horse population was outstripping even the rapid rise in the number of human city dwellers. American cities were drowning in horse manure as well as other unpleasant byproducts of the era’s predominant mode of transportation: urine, flies, congestion, carcasses, and traffic accidents. Widespread cruelty to horses was a form of environmental degradation as well.

Reverse Engineers

Leaky, a car insurance comparison website, ran into a problem:

The problem? In order to compare the insurance prices you’d pay with different providers, Leaky was scraping the data directly from the insurance companies’ websites. It sounds like Traff wasn’t entirely surprised by the letters (“We understood their objections and complied with them,” he says now), but he thought Leaky would have more time to fly under-the-radar while it figured out the best way to get its data. However, the high-profile launch made that impossible, and the site went offline after four days.

The solution?

Now Leaky is back, and it’s offering price comparisons based on a new data source — the regulatory filings that car insurance companies have to file with the government. Using those filings, the company has created a model that predicts, based on your personal details, how much each insurance provider will charge.

I presume he means the rate filings insurers give to regulators (I smell an actuary in there somewhere!). This is a fascinating project but I’m pretty pessimistic.

The web startup model, as I see it, is to build something geeks love, piggyback on the free advertising in the startup press and wait to get bought out by someone who has the platform to actually bring your product to the masses.

Leaky is offering no product, though. They’re offering replica pricing. Oh, but it’s so close to the real thing!

That means Leaky is no longer getting its prices directly from the providers, but Traff says the new model is making predictions that fall within 3 percent of the actual prices.

First lesson in stats: means mask the tails of the distribution. There’s plenty of wiggle room in 3% average deviation (if that’s what he means) to make this product completely useless.
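A quick simulation makes the point. All numbers here are invented; the only assumption doing real work is that the model’s errors have a fat tail, which rating-model errors usually do:

```python
import numpy as np

rng = np.random.default_rng(0)
true_prices = rng.uniform(800, 2000, size=100_000)  # made-up annual premiums

# Fat-tailed relative errors (Student's t, 3 degrees of freedom), scaled so
# the *average* absolute deviation lands around 3 percent.
rel_error = 0.027 * rng.standard_t(df=3, size=true_prices.size)
predicted = true_prices * (1 + rel_error)

abs_dev = np.abs(predicted / true_prices - 1)
print(f"mean absolute deviation: {abs_dev.mean():.1%}")                # ~3%
print(f"worst 10% of quotes off by {np.quantile(abs_dev, 0.9):.1%}+")  # ~6%
print(f"worst 1% of quotes off by {np.quantile(abs_dev, 0.99):.1%}+")  # ~16%
```

An average that tight still leaves the ranking of two insurers a few percent apart as essentially a coin flip, which is exactly the comparison the product exists to make.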

Car insurance is not unlike car manufacturing. I remember reading an interview with Carlos Ghosn where he was lamenting that the only way to make money is to have huge scale in auto manufacturing and the only way to get that scale is to kill your margins.

Online platforms, like manufacturing plants, are a colossal capital outlay. As soon as it’s up, insurers need to pour money into advertising to get people to the site. Sure you’re cutting out the broker, but you need to pay Google and network TV to get the word out and promise (cross our hearts) that your deals are actually cheaper.

And the real cheap deals only come occasionally as a carrier grasps for market share. Leaky can’t predict that from the rate filing.

So the only way to improve on the existing model is to compare real quotes from real insurers. Online players killed the broker a long time ago; they aren’t going to let him back in now.

Flummoxed By Florida No Fault

There’s change a-happening in the Florida auto insurance market.

Auto insurance is expensive for Floridians. The reason is that they file a lot of expensive claims, more than most. Floridians do this because… well that’s what I’ve been thinking a lot about lately.

First I’ll misquote Bastiat:

Claims Fraud is the great fiction through which everybody endeavors to live at the expense of everybody else.

Ok, let’s swipe some graphs from the indispensable III to illustrate the problem (source here and here).

Exhibit A:

So there’s a problem with auto insurance. Got it. Why?

Well, Florida is a No-Fault state, which means that beneath a certain threshold ($10,000 in this case) you claim on your own insurance policy when you get in an accident regardless of who hit whom. Everyone is pretty focused on the No-Fault aspect of the problem.
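In miniature, and glossing over a pile of legal wrinkles, the $10,000 threshold splits a claim something like this (a toy sketch, not the actual statute):

```python
def split_claim(injury_cost, pip_limit=10_000):
    """Return (paid by your own No-Fault/PIP cover, pursued At-Fault under BI)."""
    pip = min(injury_cost, pip_limit)
    bi = max(injury_cost - pip_limit, 0)  # above the cap you sue the other driver
    return pip, bi

print(split_claim(8_000))   # (8000, 0): stays entirely No-Fault
print(split_claim(40_000))  # (10000, 30000): most of it lands on Bodily Injury
```

Keep that split in mind; it matters later.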

And there’s evidence of a problem. Here’s a graph detailing the growth in claims frequency and severity for No-Fault in Florida:

And newspapers have been going bananas down in FL, decrying the Florida No-Fault “Fraud Tax”. Catchy, non?

I’m not completely sure what a “Fraud Tax” is (I haven’t found any published methodology for calculating it anywhere), but here is the III’s view:

The combined impact of rising frequency and severity of claims is driving up the cost of pure premium, which is defined as the premium needed to pay for anticipated losses without considering other costs of doing business. The only reasonable explanation for this dramatic rise: no-fault fraud and abuse.

Even if you accept that claims frequency and severity are increasing in Florida, I’d say that’s a pretty weak assertion.

They get stronger as the report continues.

Insurers also report suspected fraud to the National Insurance Crime Bureau (NICB), an insurer-funded, nonprofit organization of more than 1,000 members, including property/casualty insurers. The NICB is the nation’s leading organization dedicated to preventing, detecting and defeating insurance fraud and vehicle theft. The NICB gives a closer review to claims that are considered questionable and investigates them based on one or more indicators of possible fraud. A single auto insurance claim may be referred to the NICB for several reasons, and these “questionable claims” are flagged because they possess indicators of:

  1. Staged accidents
  2. Excessive medical treatment
  3. Faked or exaggerated injury
  4. Prior injuries (that are unreported in the new claim)
  5. Bills for service not rendered
  6. Solicitation of the accident victim(s)

A single claim may contain several referral reasons. Questionable claims involving staged accidents surged 52 percent in 2009. For 2010, early estimates suggest an even larger increase.

And the kicker is this graph:

No-Fault is increasing quite a lot. But EVERYTHING is increasing, isn’t it? And how about #2, there, Bodily Injury? Well, Bodily Injury is actually where the story is, in my mind. That’s the At-Fault coverage that extends above the $10,000 cap on No-Fault. You need to go to court and sue people and stuff for that.

What’s more, the Bodily Injury insurance market is 2.5x the size of No-Fault in Florida. BI is the 800 pound gorilla. Why isn’t anyone talking about it if it’s in the dumps, too?

Chris Tidball has an interesting analysis (and is now on the blogroll!):

The problem in Florida is substantial. First, the threshold for determining whether a party may sue has been watered down by the courts over the years, meaning that virtually any injury, irrespective of how minor it actually is, can be adjudicated, even if the true interpretation of the tort threshold says otherwise.

Secondly, a person is able to sue for any percentage of damage for which they were not at fault. Even if a person is 99.9 percent at fault, they are able to sue for damages.

None of that is No-Fault and I’m not sure how it’s related to the organized staged accidents and No-Fault Fraud. Are we addressing the wrong problem?

Ok, give me your hand and let’s walk slowly through this regrettably dense graph I put together with SNL data on the Florida market.

The solid lines are the written premium levels (left axis – see how much higher At-Fault is?) and the dotted lines are reported loss ratios (right axis):

Note that there is a bit of a basis mismatch in the data presented. The loss ratios are reported losses over earned premium while the premium is written premium, which is a more responsive indicator of market pricing.
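For concreteness, here’s the mismatch in miniature (all figures invented):

```python
# Written premium responds immediately to rate changes; earned premium lags
# because policies earn out over their term. The dotted lines divide by earned.
written_premium = 1_000.0  # premium signed this calendar year
earned_premium = 900.0     # portion of in-force premium "used up" this year
reported_losses = 720.0    # paid losses plus case reserves booked so far

loss_ratio = reported_losses / earned_premium
print(f"reported loss ratio: {loss_ratio:.0%}")  # 80%
```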

The grey region is a classic market turn. Claims costs go up massively, insurers lose lots of money and premiums respond after a lag. It happened in No-Fault and it happened in At-Fault.

This time is different. No-Fault is playing that movie over again but At-Fault doesn’t seem to be, in spite of the increase in staged accidents noted above. What are we to make of this?

Some possibilities:

  1. The problem isn’t fraud, which appears to be affecting both No-Fault and At-Fault similarly without a similar impact on loss ratios;
  2. Fraud incidence is higher in At-Fault but fraudsters are less successful when they need to go to court.

One observation on the graph above: in 2007, Florida’s No-Fault law expired and FL was At-Fault only for 3 months. But the loss ratio for that year was higher!

What I’d really like is to find a natural experiment in a state that modified its No-Fault laws. The only example I can see is Colorado, which repealed its No-Fault system in 2003.

Here’s what happened:

At-Fault loss ratios dropped a bit immediately, but the drop persists!

Remember the purpose of No-Fault was to lower expenses. I’d imagine that claims expenses and overhead have also risen on the At-Fault book to deal with the increase in smaller claims.

Does that mean that we should expect a higher expense ratio on the At-Fault FL book once No-Fault reform comes into play? That would suggest an advantage to carriers with efficient back offices…

The Real Housewives of Actuarial Science

I read this press release in my inbox this morning announcing that the Society of Actuaries (SOA) was launching a P&C designation, normally the province of the Casualty Actuarial Society. My half-attentive first thought was that the SOA (life actuaries, mostly) and the CAS (P&C actuaries, mostly) were deepening their partnership, perhaps some day to merge.

Jim Lynch has a different take: SOA declares war on CAS

The question I have is: why should anyone care? These organizations already collaborate on exams, so it’s not like the SOA is going to weaken the examination gates and let all the ‘riff-raff’ in. Designations are purely about signalling and prestige except, as Jim Lynch notes, when it comes to signing off on Actuarial Opinions, which the SOA P&C designation can’t support, as far as I can tell.

So it’s all about passing hard exams. Pedants will no doubt quibble about curriculum minutiae or “which one is harder”, but the bottom line is that nobody cares enough about this to spend the time wondering which designation to pursue. My prediction is that the SOA’s initiative will either merge with the ACAS or fizzle out.

If these organizations want to grow they need to do better than copy each other.

Science (?) And My Insurance BS Test

Richard Feynman defines science as the study of nature and engineering as the study of things we make. I like that logic and it makes the idea of an insurance company hiring a Chief Science Officer faintly ridiculous. Science today means ‘using tools that scientists use’.

Anyway, I have a test for the degree to which an article on insurance is BS or not. It’s the Climate Change Test. If the article or interviewee mentions climate change as a problem they want to think about in connection with insurance rates, they’re probably full of it.

My point is that big politicized science questions have no place at an underwriter’s desk: identifying claims trends is fine, but don’t dress the discussion up in some topic du jour just to pretend to be talking about something ‘people care about’. That’s pure, irritating status affiliation.

Well guess what:

MB: For the present we’ll be organized such that the operational analytics will continue to reside in the business units. On one end of a continuum is the traditional loss modeling; on the other end we’ll be responding to things like climate change in partnership with institutions such as the RAND Corporation. On a scale of one to ten, the familiar operational analytics may be a “one” and collaboration with RAND might be a 10. The sweet spot for the office is probably between four and 10. I envision that the science team will support the businesses in questions that have been asked but not addressed because of immediate burning issues or haven’t been asked in the most cohesive way.

Jim Lynch is puzzled about whether this is an actuarial role or not. It sure is. In most companies, C-suite folks all have ridiculously busy jobs so can’t focus on data mining and statistical analysis. But most companies don’t employ hundreds of highly trained statisticians to think about these problems every day. AIG does.

Anyway, what’s his strategy? Go fancy:

MB: Commercial and personal property insurance is largely about low-frequency, high-severity risk. The industry has tried, with limited success, to model that risk through traditional analytic techniques. However, there remains a huge amount of volatility associated with an insurance company’s finances. We hope to explore ways of thinking about risk questions differently, approaching them from a different angle while leveraging relevant data. It’s more than a matter of using traditional and even non-traditional statistical analysis; it’s about bringing game theory, possibly real options theory and more broadly about reshaping the approach fundamentally to gain new insight into how to manage claims and better understand low-frequency, high-value events.

He’s been an internal consultant in insurance for 10 years. I’ll be surprised if he can come up with ways of out-analyzing the teams of actuaries AIG employs.

*Bad writing award for this line from his CV:

Creating and leading the team challenged to inculcate science driven decision making into an organization that has achieved great success by making heuristic decisions on the backbone of its sales force.

Write Dead Cat ILWs (a year later)

It’s a crazy time of year for us reinsurance brokers. The days just before 1/1 feature little ‘work’ in terms of hours spent making stuff like submissions, analyses, meeting schedules, etc. They do feature, however, a load of stress as the negotiations for deals incepting at 1/1 reach their fever pitch.

Anyway, in my spare moments this week I’ve sought refuge from the casualty market and swapped my actuary hat for my catastrophe analyst hat.

I have an ongoing fascination with inflation rates. I’m not entirely sure what they are, but wow do they sure exist. Have a look at these figures (swiped from Swiss Re’s sigma reports) on catastrophes deflated to their original nominal values by the CPI, which Swiss Re uses to trend their losses to present-day dollars. I’m going to argue this is a stupid idea.

There are two trend lines below.

One, the red, is a linear fit of the ln of the CPI level. (Always use the log of a growth rate to pull out the compounding effect*)

The other line is my nominal catastrophe cost data. Notice that the slope is about double that of the CPI. Obviously some of that is due to the fact that there are WAY more outliers.

Remove those (arbitrarily), though, and some effect remains.
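For the record, here’s the sort of fit behind the red line, with a made-up CPI series so you can see why the log matters: constant inflation makes the level exponential but its log linear.

```python
import numpy as np

years = np.arange(1970, 2011)
cpi = 38.8 * 1.044 ** (years - 1970)  # pretend CPI compounding at 4.4%/yr

# Fit a straight line to ln(CPI); the slope is the continuous growth rate.
slope, intercept = np.polyfit(years, np.log(cpi), 1)
print(f"implied annual inflation: {np.expm1(slope):.2%}")  # recovers ~4.40%
```

Fit the raw level instead and the “slope” chases the curvature of the compounding, so it depends on which end of the series you look at.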

The CPI is a funny thing – a strange mixed bag of goods and services. Occasionally I dig into the CPI and one day I hope to see if I can figure out what a good basket for predicting these cat cost levels might look like.

I’m missing Non-US data, of which there is some in the source files I used. I kicked what was there out because they’re presented in USD each year (at that year’s FX rate) and inflated using the US CPI. Blagh. PPP is weak weak weak and even in theory only applies to general monetary inflation, which is completely different to building cost inflation plus the hundred other possible non-monetary sources of claims inflation for catastrophes+.

Ok, the next bit of analysis, which is what I really wanted to do. Here’s the question: how good are the estimates of the total cost of these disasters at year-end following the loss? Here’s the answer: pretty good.

Each year, Swiss Re report the Industry loss size and, amazingly, the nominal values rarely fluctuate.

There are two notable exceptions. First is Katrina, which jumps big-time, from 45bn to 65bn. The reason for this is that Swiss Re started including flood losses in this total only a year later. The second exception is Wilma, which was notorious at the time for being an under-reserved loss. We were down to the ‘W’ in the hurricane alphabet and some suspect that insurers were being willfully blind to preserve capital/face so they could better handle Katrina.

These estimates are used heavily in a market for products called ILWs (Industry Loss Warranties). These work like this: pick a level (say 20bn US Wind) and collect if the industry payout exceeds that amount. Lower attachments (10bn, 5bn, etc) mean higher probability of loss and so a higher price.

What’s more is that there’s a market for covers called ‘dead cats’, which supply coverage for a catastrophe after it happens. For example, when Hurricane Katrina hit, the first loss estimates were something like 20bn (this being a week after the loss). A prudent insurance company might look at that loss and say, wow, I think it’s going to be much higher than that. They then go to the market and buy cover against the deterioration of the loss estimates of a catastrophe that’s already happened (a ‘dead catastrophe’ or dead cat).
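Mechanically, both contracts are the same simple trigger. Here’s a sketch with a binary payout and invented trigger levels (real terms vary deal by deal):

```python
def ilw_payout(industry_loss_bn, trigger_bn, limit=10_000_000):
    """Pay the full limit if the reported industry loss breaches the trigger."""
    return limit if industry_loss_bn >= trigger_bn else 0

# A live ILW: 20bn US Wind trigger, judged against the eventual industry estimate.
print(ilw_payout(24.5, trigger_bn=20.0))  # pays

# A dead cat is the same trigger set *after* the event, against deterioration:
# bought when the estimate was ~20bn, triggering if it creeps past, say, 35bn.
print(ilw_payout(65.0, trigger_bn=35.0))  # a Katrina-scale revision pays
```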

This analysis suggests dead cats are a great write. A year out, anyway.

* A few weeks ago, my mom said the following to me (paraphrased): “I remember so little about high school math. I learned what I needed to to get through it and into a good University, but who really cares? Like, does ANYBODY actually USE logarithms?”

+ For example, catastrophe models use one proxy called ‘demand surge’, which purports to measure the non-linear increase in costs for rebuilding things when the normal supply of local materials/labor is fully occupied. Economies can only be stretched so far. Remember people traveling from all over the country to New Orleans to build houses after Katrina? Demand surge measures what the market for housing supplies looked like before all that extra supply showed up.

Lessons in Pitching

How Fab.com got its backing:

From a fundraising standpoint, providing access to the RJ data basically said to the VC’s, “here we are, here’s the data, we’ve got nothing to hide, take a look and decide for yourself if you want to pursue investing in Fab.” Effectively, we turned the pitching on its head. Since the RJ data updates several times per day directly from our database, it was many times more powerful than providing powerpoints and excel spreadsheets. This was the real stuff, auto-updating! And, since RJ enables all the data to be downloaded into excel, the analysts at the VC firms were able to do all of their own analysis on the front end of the investment process.

Now I’ll break away from this to describe what I do for a living: I raise capital for insurance companies.

Insurance companies typically aren’t financially strong enough to absorb the risks associated with the policies they write. Every year, then, they renew reinsurance arrangements with third party companies that give them a boost. This process has all kinds of effects:

  • In a fantastically capital-intensive business, scaling up becomes relatively trivial, if you can demonstrate that you’ll make money doing it.
  • The plain fact that reinsurers ‘give the pen’ to companies that can bind them to financial obligations without itemized signoff means the scrutiny during the renewal process is often intense. This acts as a powerful mechanism for dispensing best practices throughout the industry.
  • Bringing several capital backers up to speed on what you’re doing consumes valuable management time. Every year.

This last point is where brokers come in: we’re middlemen that facilitate this process by being a negotiation agent, knowing the market and performing some common data-crunching and cleaning tasks.

Each of these tasks is something that could be done without us. We’re middlemen, after all, as derided a professional class as there has ever been. But in every deal we minimize a big risk of failure.

Negotiation is tough and can break down easily, and data cleaning is a pain; most importantly, though, even a small market like mine changes ALL the time, and by acting as a clearinghouse for relationships we facilitate a more competitive reinsurance market (ie maximize terms for our clients in a macro sense, as well as by being individually awesome).

So let’s go back to Fab.com. They sent out the data and had a quick negotiation in which, they claim, price wasn’t much discussed. Wow, wouldn’t you want to be Andreessen Horowitz in this case? Name your price!

Maybe Fab.com have such a powerful business model that they only need to show some trend numbers and have a quick chat and wrap up 50 million bucks. But as much as it warms an engineer’s cockles to hear a story where a great data system wins the day, all of my instincts tell me that these guys got screwed.

If a deal goes down with so little pain somebody left money on the table.
——
Update:

I just read this by Mark Suster which gives me a clue for why there might be a bigger pie at stake than I thought:

And anybody who follows this blog knows that I believe television disruption has already begun and it is more likely to resemble Internet content than streaming long-form content to our living rooms.

As I talked about this model with several friends in Silicon Valley I always heard the same refrain, “we don’t invest in content business – they are ‘hits driven’.”

I had to laugh a bit at the irony of this. For one, the consumer-driven startup world has become immensely hits driven. You need star power of entrepreneurs surrounded by star power angels & VCs who in turn get tons of press from adoring journalists who are insiders amongst this crowd of tech cognoscenti.

Publicity! Big-time VCs are tech celebrities, of course, and affiliation with them can legitimize you in some important circles: early adopters, journalists and the investment bankers who will one day give you your big money exit.

Still, I need to swap my “let’s build a business!” hat for my cynic hat to have this make sense.

Then again, maybe legit publicity is actually so value-creating that it’s worth 10-30% of the upside.

Today’s Fad

Here’s a quote from HN in response to the question: what’s the big deal with Machine Learning?

There’s this enormous focus on ‘web scale’ technologies. This focus necessarily involves visualizing and making sense of terabytes and eventually even petabytes of data; conventional approaches would take thousands or millions of man hours to accomplish the same level of analysis that computers can perform in hours or days.

I totally agree. I’ve joined a few technology meetup groups here in NY and so far I’ve had interesting reactions to my field of expertise. I basically say my job is predictive models, but on small-mid-sized datasets. Cue disappointment.

Everyone is focused on predictive models that crunch BIG DATA. I’m taking a course on ML but I don’t do BIG DATA.

There are two kinds of big, you see. You can have a list of a billion addresses, but that doesn’t really qualify as BIG big. People can get their heads around what to do with a billion addresses: what regressions to run, what information can be reasonably gleaned from analyzing it.

BIG big is different. BIG refers to a HUGE number of parameters that might or might not be meaningful. Think about some problems that are common applications of ML:

For highly dimensional problems, such as text classification (i.e., spam detection) or image classification (i.e., facial detection), it’s almost impossible to hard code an algorithm to accomplish its goal without using machine learning. It’s much easier to use a binary spam/not spam or face/not face labeling system that, given the attributes of the example, can learn which attributes beget that specific label. In other words, it’s much easier for a learning system to determine what variables are important in the ultimate classification than trying to model the “true” function that gives rise to the labeling.

BIG means you don’t really know what you should do with the data. You kinda know what answer you want, but you can’t really hold a thousand or ten thousand different parameters in your head long enough to specify some kind of regression.
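Here’s what that looks like in practice, using the spam example from the quote above. This assumes scikit-learn, and the corpus is obviously a toy:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["win cash now", "meeting moved to 3pm", "free prize claim today",
         "lunch tomorrow?", "claim your free cash prize", "notes from standup"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = spam, 0 = not spam

vec = CountVectorizer()
X = vec.fit_transform(texts)                 # one column per distinct word
model = LogisticRegression().fit(X, labels)  # the learner weighs the columns for you

print(model.predict(vec.transform(["free cash now"])))  # [1]: flagged as spam
```

The point isn’t the accuracy; it’s that nobody specified which of the word-columns (thousands of them, at scale) matter. The fit figures that out.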

Now think about technology trends today. Computing power, bandwidth and memory capacity are all now cheap enough that computers can handle BIG better than humans can. THAT’s interesting.

In my professional life (sadly or happily, depending on how fired up I am to do ML), I don’t tend to get that many parameters. Insurance is rated on a very few rock-solid inputs and the rest is just sniffing out when someone is trying to screw you over.

But I’m intrigued, nonetheless. Woe to the ambitious one who doesn’t keep a tab on the cutting edge.

Here’s a link to yet another ML class.

Listen When This Man Speaks (about his business)

I think he’s the greatest non-founder executive to have walked the earth, and Jim Lynch points us to an extended treatment of Hank Greenberg’s management style (the technical stuff, not the bombast), including an interview:

Greenberg said, “You don’t want to roll your company up with undefinable risk. You have to understand the risk. The insurance industry is the only industry where you never really know the results at the end of the year. You may think you know, but you don’t. The tail on a risk could be 10 years, so you don’t really know.”

So how do you mitigate that seeming lack of understanding? “Experience is very valuable to be able to predict those costs,” he said.

“I don’t want to wake up one morning and say ‘What happened?’ ” Greenberg said.

Felix Kloman, a former Towers Perrin partner and a well-known commenter on the subject of risk, said, “Organizations can easily become risk averse. You want them to take on risk in the future and too often risk management defines risk as a negative outcome.”

Kloman said that Greenberg is the exception. “Hank is much more of a risk-taker. The CEO coordinates and encourages intelligent risk-taking.”

Here’s how insurance works: clients hand insurers money and time-bombs, which they toss into a warehouse. Luckily, most time-bombs are duds and, when they do go off, the walls of the warehouse are strong enough to withstand the bang.

Obviously you want as many time-bombs as you can get because you want the money, too. You can use that money to build thicker walls on your warehouse, allowing you to stuff more bombs in there. The problem is that, all too often, insurers don’t find out they’ve overbought time-bombs until it’s too late.

All you can do then is sit there and watch them go off.

Striking that balance between growth and risk management absolutely boggles the mind and, frankly, gets the best of many, many executives.

Hank Greenberg was/is better at that balance than anyone else on earth.