List brokers became data brokers and nobody updated the ethics

  • Tension: We inherited an entire industry built on selling people’s information, but the scale and intimacy of that information have transformed beyond recognition.
  • Noise: Privacy debates focus on tech giants and hackers while the quiet, legal data trade operates in plain sight with outdated moral frameworks.
  • Direct Message: The ethics of selling a mailing list cannot stretch to cover the sale of someone’s psychological profile, health data, and predicted vulnerabilities.

To learn more about our editorial approach, explore The Direct Message methodology.

There was a time when the most invasive thing a company could do with your information was sell your name and address to a catalog company. You might receive an unwanted brochure for garden supplies or life insurance. The worst outcome was a cluttered mailbox and mild annoyance.

That industry never disappeared. It evolved. The companies that once sold mailing lists now sell behavioral predictions, health inferences, relationship statuses, and psychological vulnerability scores. The business model remained fundamentally the same: collect information about people, package it, sell it to whoever will pay. What changed was everything else.

During my time working with tech companies in the Bay Area, I watched this transformation accelerate. The same infrastructure that once moved names between direct mail houses now moves intimate behavioral data between thousands of companies most consumers have never heard of. The legal frameworks governing this trade were written for an era of printed subscriber lists and coupon redemption tracking. The ethical conversations never caught up.

We still talk about “data privacy” as though we’re debating whether your phone number should be listed in the White Pages. Meanwhile, companies assemble profiles that predict your likelihood of developing depression, your probability of defaulting on a loan, your susceptibility to gambling addiction. The gap between what we’re discussing and what’s actually happening has become a chasm.

The Quiet Inheritance of an Unexamined Industry

The data brokerage industry traces its lineage to list brokers who emerged in the mid-twentieth century. These companies served a straightforward function: they helped businesses find potential customers by selling access to compiled mailing lists. Magazine subscribers, catalog buyers, association members. The ethical framework was simple. People had provided their information to join something or buy something, and that information could be shared with similar businesses. Consent was implied, boundaries were clear, and the worst consequence was junk mail.

This industry operated largely without controversy because the stakes were low. If a list broker sold your name to five different vitamin companies, you received five more catalogs. The information being traded was surface-level: names, addresses, perhaps a general interest category. The Federal Trade Commission’s 2014 report, Data Brokers: A Call for Transparency and Accountability, documented how these companies evolved from selling simple contact lists to trafficking in thousands of data points per individual, including sensitive health conditions, financial difficulties, and family situations.

What I’ve found analyzing consumer behavior data is that most people have no mental model for how this industry operates. They understand that Facebook shows them targeted ads. They vaguely know that Google tracks their searches. But the ecosystem of data brokers working behind these platforms, buying and selling information from hundreds of sources, combining and recombining data to create detailed psychological profiles, remains invisible to the average consumer.

The companies performing this work often have names designed to be forgettable: Acxiom, Experian, Oracle Data Cloud, LiveRamp. They process information about hundreds of millions of people, categorizing them into segments with names that reveal the intimacy of their knowledge. “Diabetes Interest,” “Substance Abuse Interest,” “Financial Hardship,” “Expectant Parent.” These categories are then sold to advertisers, insurance companies, employers, landlords, and anyone else willing to pay.

The ethical rules governing this trade were written when the product was a printed list of names. Nobody sat down and asked whether those same rules should apply when the product is a prediction about someone’s mental health status.

Why Privacy Debates Miss the Structural Problem

Public conversations about data privacy have become dominated by a handful of recurring narratives. We discuss data breaches and hackers. We debate whether social media companies should be broken up. We argue about whether we should “pay” for free services with our data. These conversations, while important, consistently miss the structural reality of how personal information actually moves through the economy.

The Electronic Frontier Foundation has documented how data brokers operate in the spaces between platforms, aggregating information from public records, purchase histories, loyalty programs, app usage, and dozens of other sources. This aggregation transforms individually innocuous data points into detailed profiles that can be used to make consequential decisions about people’s lives.

Consider the fictional but representative scenario from Tactical Tech’s Me and My Shadow project: a job applicant screened out because a purchased profile suggested personality disorders and lack of discipline. This profile was assembled from fitness tracking data, survey answers, social media likes, and dating site activity. No single piece of information was particularly sensitive. Combined, they produced a consequential and potentially inaccurate judgment about someone’s character and capabilities.

The noise in our privacy discussions comes from treating this as a technology problem or a corporate ethics problem when it is fundamentally a market structure problem. We have an entire industry whose business model depends on knowing as much as possible about as many people as possible, then selling that knowledge to whoever will pay. The incentives point in one direction only: toward more collection, more combination, more inference, more sale.

Behavioral economics research has shown that people consistently underestimate future consequences of present disclosures. We share information expecting it to be used in one context, unable to imagine how it might be recombined and redeployed years later for purposes we never anticipated. The data brokerage industry exploits this cognitive limitation as a core feature of its business model.

The Question We Forgot to Ask

When list brokers became data brokers, one question never got asked in any systematic way: should the ethical framework that permitted selling mailing lists also permit selling psychological profiles?

The original permission was for companies to share that you might want to buy gardening supplies. The current practice is companies selling predictions about whether you’re emotionally vulnerable, financially desperate, or medically compromised. These are different activities requiring different ethical frameworks, but we pretend they’re the same industry doing the same thing at larger scale.

This is the recognition that our current situation demands. Scale changes ethics. When the consequence of data sharing was receiving unwanted catalogs, implied consent and industry self-regulation might have been adequate safeguards. When the consequence is being denied employment, housing, or insurance based on inferred characteristics you never disclosed and may not actually possess, those safeguards become insufficient.

Rebuilding Ethics for the Industry We Actually Have

What would it look like to construct an ethical framework appropriate to modern data brokerage rather than its mid-century ancestor?

First, it would require acknowledging that inferred data is a distinct category from provided data. When someone fills out a survey, they understand they’re sharing information. When a company infers from their browsing patterns that they might have a substance abuse problem, that person has shared nothing. The inference was generated without their participation or knowledge. Treating these two situations as equivalent, as current practice does, represents a fundamental category error.

Second, it would require transparency about use cases. The California Consumer Privacy Act represented an initial step in this direction, requiring companies to disclose categories of information collected and purposes for collection. But the CCPA’s requirements still permit broad categories and vague purpose descriptions. Meaningful transparency would require specificity: “This data will be sold to insurance companies to make coverage decisions. This data will be sold to employers to screen job applicants.”

Third, and most challenging, it would require grappling with the question of whether certain inferences should be commercially tradeable at all. We already prohibit some forms of information commerce. Under HIPAA, a healthcare provider cannot sell your medical records without explicit authorization. Under the Fair Credit Reporting Act, consumer credit information can be shared only for permissible purposes. The question is whether inferred psychological profiles, health predictions, and vulnerability scores should join the category of information too sensitive for unrestricted commercial trade.

During my years working in marketing analytics, I observed how the industry’s internal conversations carefully avoided these questions. The focus remained on targeting efficiency, attribution modeling, return on ad spend. The humans being profiled appeared in these discussions as “users” or “segments,” their actual lives abstracted into data points to be optimized. This abstraction serves a psychological function for the people doing the work. It also prevents the ethical questions from ever being clearly formulated.

The path forward begins with naming what has happened: an industry built on one set of assumptions transformed into something entirely different while keeping its original ethical permissions. Addressing the resulting harms requires recognizing that we cannot regulate twenty-first-century data practices with mid-twentieth-century frameworks. The mailing list is gone. The psychological profile has taken its place. Our ethics need to catch up with what we’re actually buying and selling.

Wesley Mercer

Writing from California, Wesley Mercer sits at the intersection of behavioral psychology and data-driven marketing. He holds an MBA (Marketing & Analytics) from UC Berkeley Haas and a graduate certificate in Consumer Psychology from UCLA Extension. A former growth strategist for a Fortune 500 tech brand, Wesley has presented case studies at the invite-only retreats of the Silicon Valley Growth Collective, and his thought-leadership memos are archived in the American Marketing Association members-only resource library. At DMNews he fuses evidence-based psychology with real-world marketing experience, offering professionals clear, actionable Direct Messages for thriving in a volatile digital economy. Share tips for new stories with Wesley at wesley@dmnews.com.
