Why the 2007 do-not-track proposal was designed to fail


This article was published in 2025 and revisits a proposal from 2007, included here for historical context.

  • Tension: Regulatory agencies keep proposing consumer choice mechanisms while platforms engineer systems that make meaningful choice structurally impossible.
  • Noise: The debate focuses on opt-out tools and transparency notices while ignoring that individual consent cannot counterbalance institutional data asymmetry.
  • Direct Message: The 2007 do-not-track proposal failed because it treated surveillance as a consumer preference problem rather than a power imbalance.

To learn more about our editorial approach, explore The Direct Message methodology.

In 2007, nine privacy organizations approached the Federal Trade Commission with a proposal that should sound familiar: create a do-not-track list that would let consumers opt out of behavioral advertising.

The FTC held town halls. Companies like AOL announced self-regulation initiatives. Privacy advocates called for clear definitions and easy opt-out mechanisms.

The proposal included pop-up notices, browser plug-ins, and a registry system modeled after the National Do-Not-Call List.

Eighteen years later, we’re still having versions of this exact conversation. The House privacy proposal limiting how Facebook and Google handle data represents the latest attempt to give consumers “control” over their information.

Between 2007 and now, we’ve seen the General Data Protection Regulation, the California Consumer Privacy Act, countless consent banners, privacy dashboards, and data portability tools. Yet most Americans report feeling they have little control over their personal data.

The pattern isn’t coincidental. It reveals something fundamental about how we’ve framed the privacy problem.

When choice becomes a technical impossibility

The 2007 proposal rested on a seductive premise: if consumers understood what data was being collected and had easy tools to opt out, market forces would naturally regulate behavioral tracking.

Mark Cooper from the Consumer Federation of America captured this thinking perfectly when he said consumers needed “a clear and easy opportunity to opt out” so they could “trust the system.”

This framework assumes that surveillance is ultimately a consumer preference issue. Some people value personalized advertising and will opt in; others prioritize privacy and will opt out. The market accommodates both groups through transparency and choice architecture.

But this model collapses when you examine how digital platforms actually function. A 2024 FTC report found that social media and video streaming companies engage in vast data collection and processing that is largely invisible to consumers, with limited ability to control or opt out of this surveillance.

The average website now connects to dozens of third-party trackers. Mobile apps request permissions that bundle necessary functionality with extensive data collection. Cloud services make it impossible to use digital tools without feeding massive data ecosystems.

The technical architecture doesn’t support meaningful choice because the platforms weren’t designed to offer it. They were designed to extract maximum behavioral data while maintaining the aesthetic appearance of user control.

The distraction of transparency theater

The 2007 proposal emphasized making information about tracking “available to all individuals” and creating systems where “the process of behavioral marketing is transparent.”

AOL’s chief privacy officer Jules Polonetsky argued that “when people are in control of the tools, they are more comfortable letting marketers use them.”

This spawned an entire industry of privacy notices, consent management platforms, and transparency dashboards.

Companies now publish detailed privacy policies running thousands of words. They offer granular control panels with dozens of toggles. They send notifications about privacy updates and data processing activities.

Yet research consistently shows that privacy policies are too long and complex for consumers to read or understand, with the average person needing 76 working days per year to read all the privacy policies they encounter. 

The transparency mechanism itself became the barrier to understanding. When every interaction requires reviewing complex disclosures and making technical decisions about data flows, the cognitive burden makes informed consent impossible.

The real distraction is treating information asymmetry as a solvable problem through better disclosure. Platforms employ teams of data scientists, privacy engineers, and behavioral designers who understand their systems completely.

Consumers get a settings menu and a privacy policy. This isn’t a gap that transparency can bridge because the knowledge required to make informed decisions exceeds what any individual can reasonably acquire or maintain.

What the failure revealed

The 2007 do-not-track proposal never materialized as envisioned. When browser-level do-not-track signals finally emerged, major advertising platforms simply ignored them or argued about what the signals meant.
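The browser signal that eventually emerged was nothing more than an HTTP request header, `DNT: 1`, sent along with every page request. A minimal sketch of what honoring it would have involved (the header dictionary and function name here are illustrative, not any real framework's API) shows how little stood between a platform and compliance, and how easily the signal could simply be ignored:

```python
# Illustrative sketch: how a server could read the browser's
# Do Not Track signal. Browsers sent the header "DNT: 1" when
# the user opted out of tracking; honoring it was always voluntary.

def dnt_enabled(headers: dict) -> bool:
    """Return True if the request carries an enabled DNT signal."""
    return headers.get("DNT") == "1"

# A hypothetical incoming request from a browser with DNT turned on.
request_headers = {"User-Agent": "ExampleBrowser/1.0", "DNT": "1"}

if dnt_enabled(request_headers):
    # Nothing in the protocol compelled this branch to exist;
    # advertising platforms were free to track anyway.
    pass  # skip behavioral tracking for this request
```

The check is trivial; the failure, as the proposal's history shows, was never one of engineering.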

The mechanism failed not because of technical limitations but because it challenged business models that had already become foundational to the internet economy.

Privacy cannot be solved through consumer choice when the systems are designed to make surveillance the default condition of participation.

The proposal’s failure illuminated something the privacy debate still struggles to acknowledge: individual consent mechanisms cannot counterbalance institutional power asymmetries.

When platforms control the infrastructure, design the interfaces, set the default configurations, and determine what choices are even available, framing the issue as consumer preference legitimizes the underlying power structure.

The do-not-track list would have required companies to respect consumer choices. But it didn’t question whether consumer choice should determine the boundaries of acceptable surveillance in the first place.

Beyond the choice paradigm

The path forward requires abandoning the premise that got us here.

Instead of asking how to give consumers better tools to opt out of tracking, we should ask what kinds of tracking should be permissible regardless of consumer consent.

This isn’t hypothetical. The European Union’s Digital Services Act and Digital Markets Act establish rules about platform behavior that don’t depend on individual consent.

They recognize that some practices create systemic harms that individual choice cannot address.

These regulations impose obligations on platforms based on their structural role in digital markets, not on whether users clicked “accept” in a consent dialog.

The 2007 proposal failed because it tried to solve a structural problem with an individual solution.

Eighteen years later, effective privacy protection requires recognizing that surveillance isn’t just invasive because individuals didn’t consent to it.

It’s problematic because it concentrates unprecedented power in institutions that use behavioral data to shape markets, influence decisions, and modify behavior at scale.

The do-not-track list was never going to work because opting out of surveillance while still participating in digital society had already become impossible by 2007.

The platforms won that battle before most people realized it was being fought.

The question now isn’t how to restore individual choice within surveillance systems, but whether we’re willing to constrain those systems regardless of what consumers theoretically agree to.


Melody Glass

London-based journalist Melody Glass explores how technology, media narratives, and workplace culture shape mental well-being. She earned an M.Sc. in Media & Communications (behavioural track) from the London School of Economics and completed UCL’s certificate in Behaviour-Change Science. Before joining DMNews, Melody produced internal intelligence reports for a leading European tech-media group; her analysis now informs closed-door round-tables of the Digital Well-Being Council and member notes of the MindForward Alliance. She guest-lectures on digital attention at several UK universities and blends behavioural insight with reflective practice to help readers build clarity amid information overload. Melody can be reached at melody@dmnews.com.
