A letter arrived a couple weeks ago. Professional letterhead, formal language, deeply apologetic tone. A company I had never heard of — had never done business with, had never knowingly interacted with — was writing to inform me that my personal information had been compromised in a data breach. Social Security number. Financial data. The works.
My first reaction wasn’t panic. It wasn’t even surprise. It was something closer to exhaustion. Because this, by my rough count, is the eighth time this has happened to me. My data is already out there, floating around the dark web like a digital tumbleweed. At this point I’m less a private citizen and more a publicly traded asset.
But here’s what actually got under my skin: I had no relationship with this company. None. Zero.
What I had — somewhere in the fine print of some other service I’d signed up for, buried inside a six-page block of legalese specifically engineered to never be read by any living human — was a checkbox I clicked without fully understanding the cascade of third parties and sub-processors and “trusted partners” that checkbox quietly enrolled me in. You agreed. You consented. It says so right there on page four, paragraph eleven, subsection (c)(ii), immediately after the part about governing jurisdiction and immediately before the part that would require a law degree to parse.
Welcome to the data economy. You’re the product. Nobody told you.
This Keeps Happening Because Nothing Happens
Let me be precise about something: this problem doesn’t persist because hackers are impossibly sophisticated. It persists because there is no meaningful penalty for letting it happen. None. The math is simple and brutal: the cost of getting breached is lower than the cost of not getting breached. Until that changes, nothing else does.
Think about 2025 alone. The parade of catastrophic failures is almost too long to read without needing a drink.
TransUnion — the company whose entire reason for existing is to safeguard your financial identity — was breached through a compromised third-party application, exposing full names, PII, and Social Security numbers across 4.4 million records [1]. Yale New Haven Health: 5.5 million records [2]. PowerSchool, the platform running inside school districts across the country: 71.5 million records [3]. Prosper Marketplace, one of the largest single-company breaches of the year: 17.6 million records, including SSNs and financial data [4]. A single Salesforce supply chain attack that touched over 700 organizations: more than one billion records [5]. And a misconfigured database at Mars Hydro — just left wide open, no password, sitting on the public internet like a lost wallet on a busy sidewalk — 2.7 billion records [6].
And what happened to the executives responsible? What was the consequence?
They kept their jobs. The bonuses kept flowing. The company sent you a letter with a code for twelve months of free credit monitoring — a gesture so cosmetic it would be funny if the damage weren’t permanent. Your data is out there forever. A one-year subscription to a service that tells you after the fact that your identity was stolen is not a remedy. It’s a participation trophy for the victimized. It’s the corporate equivalent of backing over your neighbor’s dog, handing them a gift card to Petco, and calling it square.
No CTO went to prison. No CEO was fired without a golden parachute large enough to cushion a fall from orbit. The companies didn’t fold. They issued press releases, hired PR firms to draft appropriately somber apologies, and moved on.
And so of course it happens again. Of course it does.
Let’s Also Talk About the Operating System Nobody Wants to Mention
Here’s something that should be said plainly: the overwhelming majority of these breaches are happening on Microsoft Windows infrastructure. Unpatched systems. Misconfigured environments. Known vulnerabilities sitting open for months or years because remediation costs money, and money flows upward toward people who do not personally bear the consequences of doing nothing.
Microsoft has spent thirty years building the world’s most-used operating system and roughly twenty-nine of those years defending against the entirely predictable consequences of its own architectural decisions. Every year there are hundreds of patches for critical vulnerabilities. Every year, organizations fail to apply them. Every year, people lose their data.
Accountability here should be proportional to culpability. There’s a meaningful difference between a company that was hit by a genuine zero-day exploit — a previously unknown vulnerability that no reasonable patch management program could have addressed — and a company that got ransomwared because they were running software they hadn’t patched since the last administration. If a zero-day in a vendor’s product is the root cause, that vendor should bear real financial liability. If the company simply failed to patch systems they knew were vulnerable, that failure belongs entirely to them.
Right now, neither party bears meaningful liability for anything. The vendor ships. The company deploys and ignores. Something catastrophic happens. The person who loses their Social Security number gets a Petco gift card.
There’s Actually a Legal Precedent for This
Before we get to solutions, here’s something worth knowing: the concept of forcing companies to pre-fund liability for harm they might cause is not a new or radical idea. American law has done it before, in two important contexts.
The first is environmental cleanup. Under CERCLA — the Comprehensive Environmental Response, Compensation, and Liability Act, better known as Superfund — Congress established a framework that holds polluters strictly liable for contamination, follows that liability through corporate restructurings, and maintains a funded trust to cover cleanup costs when responsible parties can’t or won’t. The core principle: if you’re in the business of creating a potential hazard, you pre-fund the remediation [7].
The second is nuclear decommissioning. The Nuclear Regulatory Commission requires operators of nuclear power plants to maintain funded decommissioning accounts throughout the life of the plant — real money, set aside in advance, specifically to cover the cost of cleaning up after themselves when they’re done. The liability is pre-funded before the harm occurs, not scrambled for afterward [8].
The same logic applies here. You are storing hazardous material. The hazardous material is other people’s most sensitive personal information. The harm from its release is real, lasting, and entirely foreseeable. Pre-fund the liability.
This isn’t a new legal theory. It’s an existing one applied to a new class of hazardous actors.
A Modest Proposal With Actual Numbers
Congress won’t fix this. Let’s just stipulate that upfront and save ourselves some time. I’ll deal with Congress in a moment, but first — here’s a framework built for the real world, one with teeth, and with numbers that make the stakes legible.
Escrow-backed accountability, proportional to what you’re holding.
Any company that stores PII is required to place funds in escrow — in cash, not in leveraged financial instruments conjured by a creative CFO on a Thursday afternoon — scaled to the sensitivity of the data they hold. Something like this:
| Data Type | Escrow Per Person |
|---|---|
| Basic PII (name, address, phone) | $1,000 |
| Social Security Number | +$1,000 |
| Health / Medical Data | +$500 |
| Financial Account Information | +$500 |
| Sexual Orientation / Gender Identity | +$1,000 |
| Race / Ethnicity | +$1,000 |
Now let’s make that real. A mid-sized company — say, 50,000 customers — storing names, contact info, SSNs, and basic financial data is looking at a minimum escrow requirement of roughly $125 million. Not a line item in a risk management spreadsheet. Not a theoretical exposure noted in a 10-K filing. Real money, sitting in a real account, that belongs to real people the moment something goes wrong.
For a company like TransUnion, sitting on the financial identities of hundreds of millions of Americans, the escrow figure would be measured in the hundreds of billions. Which is, not coincidentally, an accurate reflection of how much damage they’re capable of causing. Maybe that number should exist before the breach, not just in the class action filing afterward.
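The arithmetic behind these figures is simple enough to sketch. Here is an illustrative calculation of the escrow requirement, using the rates from the table above; the category names and the hypothetical 50,000-customer profile are mine, not part of any real filing.

```python
# Escrow rates from the proposal's table, in dollars per person.
# Category names are illustrative shorthand, not statutory terms.
ESCROW_RATES = {
    "basic_pii": 1000,             # name, address, phone
    "ssn": 1000,                   # Social Security number
    "health": 500,                 # health / medical data
    "financial": 500,              # financial account information
    "orientation_identity": 1000,  # sexual orientation / gender identity
    "race_ethnicity": 1000,        # race / ethnicity
}

def per_person_escrow(data_types):
    """Sum the escrow owed per individual for the data categories held."""
    return sum(ESCROW_RATES[t] for t in data_types)

def required_escrow(data_types, customer_count):
    """Total cash a company would hold in escrow before storing the data."""
    return per_person_escrow(data_types) * customer_count

# The hypothetical mid-sized company from the text: 50,000 customers,
# storing names/contact info, SSNs, and basic financial data.
held = ["basic_pii", "ssn", "financial"]
total = required_escrow(held, 50_000)
print(f"${total:,}")  # $125,000,000
```

Scaling the same per-person rate to a credit bureau holding data on a few hundred million people is what pushes the figure into the hundreds of billions.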
Suddenly “we’ll deal with security next quarter” becomes a very different conversation in the boardroom.
Automatic cash payouts. Per person. No asterisks.
When a breach occurs, that escrow money flows directly to the individuals affected. Not into a settlement fund where the plaintiff attorneys collect $40 million and you receive a $7.50 credit toward a future purchase. A direct, per-person cash payout — no claims process, no 90-day review window, no form to fill out in triplicate.
If the company cannot cover those payouts — if the escrow has mysteriously evaporated, as money has a habit of doing when executives are involved — breach victims are first in line in any bankruptcy proceeding. Before the creditors. Before the bondholders. Before the institutional investors who bet on a company that couldn’t be bothered to hire a competent security team.
Real long-term coverage. A decade, not a year.
Mandatory ten-year, 100% covered identity theft dispute support. Because the damage from an exposed SSN doesn’t expire in twelve months. It shows up three years from now when you’re trying to buy a house. It shows up five years from now in a background check. It shows up when some fraudster files a tax return in your name in 2031. The liability should be as long-lived as the harm.
A registry for executives who lose the data.
FINRA — the Financial Industry Regulatory Authority — maintains a public database called BrokerCheck. Every registered broker and investment advisor in the country has a record in it. Customer complaints, regulatory actions, terminations for cause — all of it is publicly visible to anyone considering doing business with that person or hiring them. It’s not a criminal sanction. It’s a transparency mechanism. It’s how the financial industry decided that the people entrusted with other people’s money should have a publicly searchable track record.
It works. It’s accepted as completely normal. Nobody argues that it’s unfair to financial professionals.
I want the same thing for the C-suite executives who preside over material data breaches — a public, searchable record tied to their professional identity. Not a criminal conviction. Not a civil judgment. Just a transparent, accessible account of the fact that Company X, under your leadership, lost the personal data of three million people because you made certain decisions about security investment, vendor oversight, and patch management.
The next board evaluating you for a CTO role sees that record. They can make an informed decision. The market responds to information. Right now, it has none.
We already do this for stockbrokers. We already do it for sex offenders. We apparently draw the line at the people holding your medical records and Social Security number.
So How Does Any of This Actually Happen?
Ah. The part where we talk about Congress.
I’ll keep this brief, because there isn’t much to say. Congress — both parties, every flavor, across every administration in recent memory — has had ample opportunity to address data privacy and corporate accountability, and has produced a thicket of incomplete, toothless, industry-lobbied non-solutions that accomplish roughly nothing. This is not a partisan observation. Republicans defer to corporations on the grounds of regulatory burden. Democrats defer to corporations on the grounds of innovation and campaign finance. The outcome is identical: you get a breach notification letter and a Petco gift card, and the executives get their bonuses.
Don’t let anyone tell you this is a left problem or a right problem. It is a money problem. The companies that are systematically failing to protect your data are also systematically funding both sides of the aisle, and Congress is very good at biting the hand that doesn’t feed it. Yours, specifically.
The Senate Commerce Committee will hold a hearing. Stern questions will be asked. Executives will express profound regret and deep commitment to doing better. Nothing will happen. The cameras will move on. Rinse and repeat.
So no. Not Congress.
The path is through the states. Ballot initiatives. State-level market access requirements: if your company — or your vendors, or your vendors’ vendors — handles the PII of state residents, here are the rules for doing business here. California has taken meaningful steps in this direction with the CCPA. Colorado’s Privacy Act offers another framework to build on. The playbook exists. What’s needed is the political will to force it onto ballots in states where legislatures won’t touch it voluntarily, and then to defend it against the inevitable avalanche of well-funded corporate opposition that will follow.
This is exactly how other consumer protections that corporations opposed have made it into law. It is slow, expensive, and unglamorous. It is also the only mechanism that actually works when the federal government has been captured by the industry it’s supposed to regulate.
A Direct Appeal
If you’ve read this far and you’re an attorney — particularly one who works in consumer protection, privacy law, or civil litigation — I’d genuinely like to hear from you. The framework outlined above isn’t a policy fantasy. It’s a coherent legal theory with existing precedent, applied to a new class of harm. What it needs is people with the standing and expertise to explore what it would take to make it real, whether through legislation, litigation, or ballot initiative strategy.
If that sounds like your kind of fight, send me a DM.
Quick poll for the comments: how many data breach notification letters have you received in the last five years? Drop a number. I’m genuinely curious whether my eight is above average, below average, or just another Tuesday.
The author is a renaissance man — martial artist, mead maker, beekeeper, carpenter, cook, and published author — as well as a technologist with 20+ years building complex software systems spanning cloud infrastructure, contact center solutions, and AI-driven architecture. The opinions expressed are his own, offered freely, and delivered with the full understanding that at least three companies he has never heard of already have his home address.
References
[1] Top Data Breaches in 2025 — Month-Wise — TransUnion breach exposing 4.4 million records via compromised third-party application.
[2] Biggest Data Breaches of 2025 — Analysis — Yale New Haven Health breach affecting 5.5 million patient records.
[3] Top Data Breaches of 2025 and Lessons Learned — PowerSchool breach impacting 62 million students and 9.5 million teachers.
[4] Biggest Data Breaches of 2025 — Analysis — Prosper Marketplace breach exposing 17.6 million records including SSNs and financial data.
[5] Top Third-Party Data Breaches in 2025 — Salesforce supply chain attack affecting 700+ organizations and over one billion records.
[6] Top Data Breaches of 2025 and Lessons Learned — Mars Hydro misconfigured database exposing 2.7 billion records.
[7] CERCLA and Federal Facilities — EPA overview of Superfund liability framework for environmental contamination.
[8] Nuclear Regulatory Commission Decommissioning Funding — GAO report on pre-funded decommissioning requirements for nuclear power plants.
