No Renewal Due to Adverse Selection — Superversive

Insurance companies like AIG spend a lot of money on catastrophe modeling systems, weather-related databases, and actuaries to have more and better information to place calculated bets on future outcomes. Insurance is regulated at the state level. There are 50 different insurance ecosystems i…

Superversive

@hankg @JoshuaHolland

It is •impossible• that automobile #insurance actuaries are not tracking all of this and increasing premiums to avoid #AdverseSelection.

Actuaries: World’s Oldest Data Scientists — Superversive

In the US, there are 50 regulatory regimes with oversight, math nerds trying to avoid adverse selection, and bonuses tied to effectively maintaining mandated loss ratios. When math calculations & simulations demonstrate potential losses - due to severity and duration of events - outweighi…
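The card above refers to mandated loss ratios, the metric insurers are judged on. A minimal sketch of how it is computed; all dollar figures and the viability threshold are hypothetical, not taken from the post:

```python
# Loss ratio = incurred losses / earned premiums.
# The book of business below is made up for illustration.

def loss_ratio(incurred_losses: float, earned_premiums: float) -> float:
    """Incurred losses divided by earned premiums for a period."""
    return incurred_losses / earned_premiums

earned = 100_000_000.0   # premiums earned over the period (assumed)
losses = 72_000_000.0    # claims incurred over the period (assumed)

lr = loss_ratio(losses, earned)
print(f"loss ratio: {lr:.2f}")

# A line of business stays viable only while expected losses (plus
# expenses) keep the ratio inside a band the insurer can sustain.
SUSTAINABLE_MAX = 1.0  # assumed break-even ceiling, ignoring expenses
print("viable" if lr < SUSTAINABLE_MAX else "exit market")
```

When climate losses push this ratio past the sustainable band in a state the insurer cannot reprice in, non-renewal is the remaining lever.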

Superversive

@ZaneSelvans
Climate change will break the insurance industry before it breaks everything else.

Actuaries are the world’s oldest data scientists.

The math doesn’t lie.

They are paid to avoid #AdverseSelection.

When the risks manifest everywhere, on everything, all the time, actuarial models collapse.

#AdverseSelection is short-hand for “not worth it” and #ThingsFallApart.
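The "not worth it" dynamic in the post is the classic adverse-selection spiral, and it can be shown in a few lines. A toy simulation, with an entirely made-up pool of members whose expected annual losses run from 100 to 1000: each year the insurer charges the pool's break-even average, the members for whom the premium exceeds their own expected loss drop out, and the pool average rises again.

```python
# Toy adverse-selection "death spiral". All numbers are illustrative.
# members[i] = that member's expected annual loss.
members = list(range(100, 1001, 100))  # 10 members, losses 100..1000

year = 0
while members:
    year += 1
    # Break-even community rate: average expected loss of the pool.
    premium = sum(members) / len(members)
    print(f"year {year}: pool={len(members)}, premium={premium:.0f}")
    # Members whose expected loss is below the premium exit ("not worth it").
    stayers = [m for m in members if m >= premium]
    if len(stayers) == len(members):
        break  # pool has stabilized at only the worst risks
    members = stayers
```

Each round the cheap risks leave, the average worsens, and the premium chases it upward until only the highest-risk member remains. When the climate makes everyone a bad risk, there is no stable pool left to price, which is the collapse the post describes.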

Superversive

"One million #Florida properties are projected to become chronically flooded: properties that today fund nearly 30% of local revenues for more than half of the state’s municipalities, according to a new study conducted by researchers at Cornell and Florida State Universities."

https://www.wmfe.org/environment/2023-10-16/sea-level-rise-drain-floridas-financial-future

#AdverseSelection #RiskManagement #insurance #sustainability

New study projects sea level rise to drain Florida’s financial future

One million Florida properties are projected to be underwater. Today, those properties fund nearly 30% of local revenues for more than half the state's municipalities.

WMFE

When insurance companies pull out of markets due to adverse selection, it reminds me of Ben Shapiro saying people would just move.

Sure, bro.

https://sampathpanini.medium.com/no-renewal-due-to-adverse-selection-123d0c6832bb

#insurance #math #statistics #RiskManagement #AdverseSelection

Of Models and Tin Men
https://arxiv.org/abs/2307.11137
An ambitious research agenda:
"In a #PrincipalAgentProblem, conflict arises because of information asymmetry together with inherent misalignment between the utility of the agent and the principal… argue the assumptions underlying principal-agent problems are
crucial to capturing the essence of safety problems involving pre-trained AI models in real-world situations."
#llm #AIEthics #MoralHazard #AdverseSelection #economics
Of Models and Tin Men: A Behavioural Economics Study of Principal-Agent Problems in AI Alignment using Large-Language Models

AI Alignment is often presented as an interaction between a single designer and an artificial agent in which the designer attempts to ensure the agent's behavior is consistent with its purpose, and risks arise solely because of conflicts caused by inadvertent misalignment between the utility function intended by the designer and the resulting internal utility function of the agent. With the advent of agents instantiated with large-language models (LLMs), which are typically pre-trained, we argue this does not capture the essential aspects of AI safety because in the real world there is not a one-to-one correspondence between designer and agent, and the many agents, both artificial and human, have heterogeneous values. Therefore, there is an economic aspect to AI safety and the principal-agent problem is likely to arise. In a principal-agent problem conflict arises because of information asymmetry together with inherent misalignment between the utility of the agent and its principal, and this inherent misalignment cannot be overcome by coercing the agent into adopting a desired utility function through training. We argue the assumptions underlying principal-agent problems are crucial to capturing the essence of safety problems involving pre-trained AI models in real-world situations. Taking an empirical approach to AI safety, we investigate how GPT models respond in principal-agent conflicts. We find that agents based on both GPT-3.5 and GPT-4 override their principal's objectives in a simple online shopping task, showing clear evidence of principal-agent conflict. Surprisingly, the earlier GPT-3.5 model exhibits more nuanced behaviour in response to changes in information asymmetry, whereas the later GPT-4 model is more rigid in adhering to its prior alignment. Our results highlight the importance of incorporating principles from economics into the alignment process.

arXiv.org