Actuarial science has changed over time. Insurance markets have changed over time. Technology has changed over time. To understand where we are today with commercial insurance pricing, we need to understand how we got here.
Over the last 125 years, commercial insurance pricing shifted dramatically from static retrospective analysis to dynamic probabilistic analysis. Around the turn of the twentieth century, actuaries began to devise a more systematic, scientific approach to insurance pricing. In fact, the Casualty Actuarial Society itself grew out of the workers compensation (WC) line of business when it was founded in 1914. These early actuaries recognized the need to provide a solid financial foundation to support the complex nature of WC benefits. The approach that emerged was a statistically and financially sound risk classification system.
Risk Classification Methodology
This risk classification approach centers on "expected value" pricing for groupings (classes) of individual risks. Each classification group acts as a separate risk collective that satisfies the "law of large numbers" requirements for independent, identically distributed risk elements. Risks can then be aggregated within and across classification groups through the "central limit theorem." This approach required the collection and retention of substantial historical exposure and loss data across all industries.
Though statistically sound, this approach formed a barrier to entry for individual insurers that were unable to develop such an extensive dataset. As a solution, data was collected and aggregated across the entire industry, for all classes, from all writers. To further complement this data aggregation, it was essential that coverage benefits were uniformly defined for all insurers. Even today, several traditional commercial lines coverages (e.g., WC, general liability, and auto) continue to use this rating approach. Risk classification rating retains a significant limitation—it can be a poor method for estimating an accurate price for a single risk.
Risk classification is based on attempts to achieve greater risk homogeneity across insureds within a classification cell. Its purpose is to achieve the closer groupings of independent and identically distributed risks required as basic assumptions by the law of large numbers and the central limit theorem. The insurance strategy is to achieve stability through the volume of insureds: the theory holds that gathering a greater number of exposures brings actual losses closer to expected losses.
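To make the law-of-large-numbers argument concrete, the following sketch simulates classes of different sizes and reports how far actual aggregate losses stray, in relative terms, from expected losses. It is illustrative only and not part of the original analysis; the Poisson frequency and exponential severity choices and all parameter values are hypothetical.

```python
# Illustrative sketch of the law-of-large-numbers argument behind risk
# classification; all frequencies, severities, and class sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=42)

FREQ_PER_EXPOSURE = 0.05   # assumed Poisson claim frequency per exposure
MEAN_SEVERITY = 20_000.0   # assumed exponential mean claim severity

def mean_relative_deviation(n_exposures: int, n_trials: int = 2_000) -> float:
    """Average |actual - expected| / expected for a class of n_exposures risks."""
    expected = n_exposures * FREQ_PER_EXPOSURE * MEAN_SEVERITY
    deviations = []
    for _ in range(n_trials):
        claim_counts = rng.poisson(FREQ_PER_EXPOSURE, size=n_exposures)
        actual = rng.exponential(MEAN_SEVERITY, size=claim_counts.sum()).sum()
        deviations.append(abs(actual - expected) / expected)
    return float(np.mean(deviations))

for size in (10, 100, 1_000, 10_000):
    print(f"{size:>6} exposures: mean relative deviation ~ {mean_relative_deviation(size):.1%}")
```

As the class grows, the relative deviation shrinks roughly in proportion to one over the square root of the number of exposures, which is the stability the classification strategy relies on.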
The Role of Rating Bureaus
These data aggregation needs led to the formation of rating bureaus. Insurers that are members of rating bureaus can use the aggregated "base pure loss cost" rates, which they then adjust for operating expenses and profit load (risk margin). The insurer then focuses on (1) individual risk selection and (2) appropriate class assignment. The pure loss cost applied to the individual insured is simply the average loss cost of all members assigned to the classification group.
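As a rough illustration of that workflow (not taken from any actual bureau filing; all figures and load percentages are hypothetical), the snippet below derives a class pure loss cost from aggregated industry data and then loads it for one insurer's expenses and profit using a simple loss cost multiplier.

```python
# Hypothetical illustration of loading a bureau "base pure loss cost" rate
# for an individual insurer's expenses and profit. All numbers are invented.
aggregated_class_losses = 12_500_000.0   # industry-wide losses for the class
aggregated_class_exposure = 250_000.0    # industry-wide exposure units for the class

pure_loss_cost = aggregated_class_losses / aggregated_class_exposure  # average loss cost per unit

expense_ratio = 0.25   # assumed insurer-specific expense provision
profit_load = 0.05     # assumed insurer-specific profit load (risk margin)

# Divide by the permissible loss ratio to gross the loss cost up to a charged rate.
charged_rate = pure_loss_cost / (1.0 - expense_ratio - profit_load)

print(f"Pure loss cost per exposure unit: {pure_loss_cost:.2f}")
print(f"Charged rate per exposure unit:   {charged_rate:.2f}")
```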
Unfortunately, in many commercial property and casualty applications, individual risks within the group can have significantly different risk profiles. Assignment to the group is based only on the assumption that the expected claim frequency of the insured will be statistically similar to the mean claim frequency of the group. Additionally, from an underwriting perspective, few risks are rejected (little risk selection). Instead, the central underwriting question becomes how to assign the individual risk to the appropriate risk class.
Individual Risk Rating Methodology
Not all commercial risks are priced based on this risk classification methodology. Individual risk rating (IRR) was designed to offset an observed lack of homogeneity across insureds. IRR is based on a seriatim (risk-by-risk) evaluation of prospective insureds. In practice, IRR relies on an individual underwriter's view of the insured's risk profile.
The premium charge can be developed based on the unique risk profile of the insured, the individual application of the insurer's offered standard coverage benefits, and finally—and most importantly—allowed variability in insuring clauses, policy limits, deductibles, etc., for this one policy. Additionally, the insurer may consider offering multiple coverages in an integrated policy and reflecting that (e.g., through package discounts) in the pricing structure.
The goal of the IRR insurance strategy is to achieve stability over time for the individual insured risk. The IRR approach assumes that the insured maintains risk homogeneity from period to period. IRR also assumes that, with a greater number of exposure periods for this individual risk, loss experience aggregated over time will approximate expected losses, despite volatile losses from period to period. Today, IRR analysis can adjust for coverage conditions as well as the integration across multiple coverages and may be enhanced through simulation methods to develop a better understanding of the entity's underlying risk distributions. Depending on exposures and coverages, the individual insured may now represent its own risk collective.
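The simulation methods mentioned above can be sketched with a simple frequency-severity Monte Carlo model for a single insured. The Poisson/lognormal choices and every parameter below are hypothetical assumptions, not figures from the article; the point is only that the insured's own aggregate loss distribution, including its tail, can be estimated directly.

```python
# Minimal frequency-severity Monte Carlo sketch for one insured under IRR.
# Distribution choices and parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=7)

ANNUAL_CLAIM_FREQUENCY = 3.2            # assumed expected claims per year
SEVERITY_MU, SEVERITY_SIGMA = 9.0, 1.2  # assumed lognormal severity parameters

n_years = 100_000
claim_counts = rng.poisson(ANNUAL_CLAIM_FREQUENCY, size=n_years)
aggregate_losses = np.array([
    rng.lognormal(SEVERITY_MU, SEVERITY_SIGMA, size=n).sum() for n in claim_counts
])

print(f"Simulated expected annual loss: {aggregate_losses.mean():,.0f}")
print(f"90th percentile:                {np.percentile(aggregate_losses, 90):,.0f}")
print(f"99th percentile:                {np.percentile(aggregate_losses, 99):,.0f}")
```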
As outlined above, underwriting models applying IRR analysis are prospective in nature. They can directly address the expected exposures to idiosyncratic risks on a seriatim (risk-by-risk) basis. Interestingly, the underwriting methodology designed to identify these idiosyncratic risks can appear to replicate a classification rating model. However, the purpose of these individual risk models is to provide the insurer with a unique understanding of an insured's idiosyncratic risk and, thereby, an enhanced understanding of the collective risks across all insureds—not simply to assign an individual insured to a specific risk classification group.
A potential benefit is a reduction in the cost of risk tracking error across insureds (a significant cost in risk classification systems). In an IRR approach, an insured with higher identified risk factors is charged a higher premium than an insured with lower identified risk factors. Frequency measures are represented on a continuum, rather than through the risk classification cell approach, in which every insured in a class is assigned the same frequency. In addition, under IRR, a priori measures of claim severity are allowed to vary more freely, with greater use of deductibles, policy limits, and coverage rules.
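A small sketch of that severity flexibility follows: the same hypothetical ground-up claims produce very different insurer costs once a per-claim deductible and per-claim limit are applied. The distribution and the coverage terms are illustrative assumptions only.

```python
# Hypothetical illustration of how deductibles and limits reshape the severity
# the insurer actually faces for an individual risk.
import numpy as np

rng = np.random.default_rng(seed=11)
ground_up_claims = rng.lognormal(mean=9.0, sigma=1.2, size=50_000)  # assumed severities

def average_insurer_cost(claims: np.ndarray, deductible: float, limit: float) -> float:
    """Mean insurer cost per claim after a per-claim deductible and per-claim limit."""
    return float(np.clip(claims - deductible, 0.0, limit).mean())

for deductible, limit in [(0.0, 1_000_000.0), (5_000.0, 1_000_000.0), (5_000.0, 250_000.0)]:
    cost = average_insurer_cost(ground_up_claims, deductible, limit)
    print(f"deductible {deductible:>9,.0f} / limit {limit:>11,.0f}: avg insurer cost {cost:>9,.2f}")
```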
From an underwriting perspective, the IRR central question is whether the insurer will accept this risk and, if so, under what limits and price. In IRR, the final premium is based not only on risk-based observations but on market conditions as well.
Updating the Pricing Formulae
These two commercial pricing/underwriting approaches can be represented by pricing formulae. Historically, both classification rate-making and individual risk rating analyses were limited to the cost of risk and fixed profit loads—capital was not a consideration. Pure loss cost rate algorithms were based solely on the expected losses and a risk margin, and risk margins were a predefined multiple of the expected losses.
The formula was Premium = Expected Loss + Risk Margin.
P = E[L] + λ E[L] = (1 + λ) E[L], where
P = Premium
E[L] = Expected Loss
λ E[L] = Risk Margin, where λ is a fixed factor, such as 2.5 percent or 5 percent of E[L]
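A quick worked example of this fixed-load formula, using the 2.5 percent and 5 percent margins mentioned above and an assumed expected loss of 100,000:

```python
# Worked example of the traditional fixed-load formula P = (1 + lambda) * E[L].
# The expected loss figure is an assumption chosen for illustration.
expected_loss = 100_000.0

for risk_margin_factor in (0.025, 0.05):
    premium = (1.0 + risk_margin_factor) * expected_loss
    print(f"lambda = {risk_margin_factor:.1%}: premium = {premium:,.0f}")
```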
Around 35 years ago, Stewart Myers and Richard Cohn, when exploring acceptable rate methodologies for the Massachusetts Department of Insurance, introduced more modern financial concepts for economically fair premium calculations.1 Their rate condition defined an economically fair premium as equal to the expected losses plus the expected risk margin paid to capital providers. This introduced the need to directly consider capital in the pricing model.
The formula is Premium = Expected Loss + Returns, all discounted by Return on Equity.
P = {E[L] + A RS} / (1 + RS), where
P = Premium
E[L] = Expected Loss
A = Assets = P + S
RS = Return on Equity
S = Surplus = A - P
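Because A = P + S, the premium appears on both sides of this relation. Substituting A = P + S and solving gives P = E[L] + S RS, i.e., expected losses plus the return owed on the committed surplus. The sketch below works through one hypothetical case and checks the result against the formula as stated above; all input values are illustrative.

```python
# Worked example of the fair-premium relation P = (E[L] + A*RS) / (1 + RS)
# with A = P + S. All input values are hypothetical.
expected_loss = 100_000.0   # E[L]
surplus = 40_000.0          # S, capital committed to support the policy
return_on_equity = 0.10     # RS

# Closed-form result of substituting A = P + S into the relation.
premium = expected_loss + surplus * return_on_equity

# Verify that the solved premium satisfies the stated formula.
assets = premium + surplus
check = (expected_loss + assets * return_on_equity) / (1.0 + return_on_equity)

print(f"premium = {premium:,.0f}")   # 104,000
print(f"check   = {check:,.0f}")     # matches the premium
```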
This economically fair premium is the basis of modern pricing theory for insurance risks.
Conclusion—Price Must Consider Capital
Traditional insurance pricing has recognized that the expected losses alone are not sufficient to cover the cost of insurance policy risk transfer. Earlier rate-making approaches simply assigned a specific profit load (λ E[L]) regardless of capital considerations. Modern pricing theory directly recognizes the uncertainty in the loss distribution of an aggregated risk portfolio. Premiums consisting only of the expected loss will not be sufficient to meet aggregated claim payments. Capital, then, is required to support adverse loss conditions. Economically, capital commitments must be compensated at a reasonable rate.
Opinions expressed in Expert Commentary articles are those of the author and are not necessarily held by the author's employer or IRMI. Expert Commentary articles and other IRMI Online content do not purport to provide legal, accounting, or other professional advice or opinion. If such advice is needed, consult with your attorney, accountant, or other qualified adviser.