Why a patent attorney and a middle school teacher have completely rational but opposite views on AI

The Direct Message

Tension: AI experts and the general public disagree about AI’s impact by a 50 percentage point margin — yet both are accurately reporting on the technology they’ve actually used. The same word, ‘AI,’ now refers to two completely different experiences depending on whether you pay $200 a month or use the free version.

Noise: The debate is framed as optimists vs. skeptics, informed vs. uninformed. But the real divide is experiential, not intellectual — power users and casual users are encountering such different products that they can no longer have a meaningful conversation about the same technology.

Direct Message: The AI opinion gap isn’t a knowledge problem that more information can fix. It’s an experience partition: two tiers of technology producing two irreconcilable realities, and until people encounter the same tool performing at the same level, the argument isn’t about AI at all — it’s about which AI you happened to meet.

Every DMNews article follows The Direct Message methodology.

In geology, a fault line is a fracture where two blocks of the Earth’s crust press against each other with such force that the ground above appears solid until the moment it isn’t. The fracture between how different people experience artificial intelligence in 2026 has become something like that: invisible on the surface, enormous underneath, and widening with a pressure that most public conversations about AI fail to register.

But this is not a story about disagreement. It is a story about economic stratification masquerading as disagreement. The emerging fault line in AI isn’t between optimists and skeptics, or between experts and the public. It is between people who can afford to use the real thing and people who are drawing rational conclusions from an inferior version of it. The result is a society splitting into two confident, well-reasoned, mutually incomprehensible camps — not because one side is wrong, but because the technology itself delivers fundamentally different realities depending on what you pay.

Consider a patent attorney who pays for access to premium large language models, using them to draft patent claims, analyze prior art across multiple jurisdictions, and generate research summaries that once took junior associates a full week. These tools can functionally double professional throughput, completing in three days what previously required six.

Meanwhile, educators trying free versions of chatbots to help plan lessons often encounter outputs riddled with factual errors, suggested experiments that could never actually be run, and hallucinated citations. Many close the tab and return to printed curriculum guides, concluding the technology is overhyped.

These aren’t stupid reactions or uninformed ones. They are both rational responses to the technology actually encountered. And that is the problem.

Research has quantified what many have felt anecdotally: the gap between how AI experts and the general public view this technology has become a chasm. Studies show substantial divergence between expert and public opinion on AI’s impact on employment, creating a gulf wider than partisan splits on most political issues.

The easy explanation is that experts know more and regular people know less. But this misreads the situation badly. Both groups are drawing conclusions from direct experience. They just happen to be experiencing entirely different technologies wearing the same name.

[Image: AI technology divide. Photo by cottonbro studio on Pexels]

Consider what professionals access when they open paid subscriptions. They’re using models that have been updated within the past few weeks, fine-tuned for professional tasks, with context windows large enough to process entire document portfolios. Recent improvements in coding, mathematics, and research domains have been substantial.

Free-tier products, by contrast, may be months behind the current state of the art. The difference between free and paid AI in 2026 is not like the difference between economy and first class on an airplane, where you get the same destination with better legroom. It is more like the difference between a bicycle and a car. Same general category of transportation. Completely different experience of the world.

This creates what might be called an experience partition, a condition where two groups of people are technically discussing the same product but functionally talking about different realities. And the partition doesn’t just separate individuals. It separates entire professional classes, income brackets, and institutional tiers — reinforcing the very economic lines it runs along.

Research describes the underlying issue as a jagged frontier of capability. Advanced AI systems have achieved remarkable feats in structured mathematical domains, yet fail at seemingly simple tasks like reading analog clocks. This is not a minor inconsistency. It is a defining characteristic of the technology in its current form: extraordinary competence in structured domains, punctuated by failures so elementary they border on absurd.

For power users, the jagged frontier is manageable. They have learned where the edges are. They know which tasks to hand off and which to supervise heavily. They have developed a mental map of AI’s reliability zones, the way a veteran pilot knows which weather patterns to fly through and which to route around. This knowledge is itself valuable, and it is unevenly distributed.

For casual users, the jagged frontier is just unreliability. Without the time to develop that mental map, a user who tries a tool once and watches it fail in an embarrassing way makes a reasonable decision: stop using it. Skepticism in these cases is not ignorance. It is pattern recognition based on direct experience.

The communication gap this creates touches something we’ve previously examined in the context of how technology reshapes the way people talk to each other. When two people use the same word to mean fundamentally different things, conversation doesn’t just become difficult. It becomes performative. Each side talks past the other while believing they’re being perfectly clear.

[Image: digital divide technology. Photo by Ron Lach on Pexels]

The infrastructure underpinning all of this carries its own concentrations. The United States hosts thousands of data centers, far more than any other country. Chip fabrication for advanced AI processors is concentrated in a small number of facilities, with TSMC in Taiwan producing a significant portion of leading-edge AI chips. The physical architecture of artificial intelligence is staggeringly centralized, even as the conversational architecture around it fragments into warring camps of optimists and skeptics who increasingly can’t hear each other.

This fragmentation mirrors a broader pattern in how people now relate to institutions and information sources. The erosion of shared epistemic ground, where trust in news, government, and social feeds has become increasingly individualized, finds a perfect analog in AI discourse. People trust what they’ve touched. And they’ve touched wildly different things.

But AI accelerates the pattern in a way that previous technology debates did not. Earlier transitions — the internet, smartphones, social media — gave the public time to catch up. The gap between older free chatbots and current premium research agents is so large that catching up requires not just using a better tool but fundamentally revising one’s model of what the tool is. And each quarterly model release that improves professional tools while leaving free tiers unchanged pushes the two camps further apart, compounding the divide faster than any public conversation can close it.

The result is a kind of cognitive stratification that maps loosely but meaningfully onto existing economic lines. The people most likely to pay substantial monthly fees for premium AI tools are people who already earn enough to justify the expense, people in knowledge-work professions where AI’s strengths happen to be most pronounced. The people encountering AI through free products and viral social media demos are getting a version of the technology that is, in many cases, genuinely less capable and less current. Each positive experience makes a power user more likely to invest time learning the tool’s edges, which makes them more effective, which makes them more optimistic, which makes them more likely to pay for the next upgrade. Bad experiences create the opposite spiral.

There is something quietly destabilizing about a technology whose quality varies so dramatically based on what you pay. We accept this in cars and restaurants. We do not typically accept it in technologies that are being positioned as foundational to the economy, to education, to healthcare. When a tool is described as civilization-defining but delivers fundamentally different results to different economic tiers, the conversation about that tool will inevitably split along the same lines.

Some of this recalls the way people signal their relationship to technology more broadly, often without realizing it. The habits people maintain on their phones communicate volumes about when and how they entered the digital world. AI is following the same trajectory, but faster. Your opinion of AI now says less about the technology and more about which version of it you’ve been exposed to.

As one observer noted on X, the discourse around AI has become a conversation where neither side is wrong, but both sides think the other is. The optimists aren’t deluded. The skeptics aren’t Luddites. They are each reporting accurately on the tool they’ve actually used. Investment analysts put it in stark terms: the risk isn’t that AI is overhyped or underhyped. The risk is that it’s both at once, in different rooms, and the rooms don’t share a door.

The gap measured in recent research is not a disagreement about the future. It is a disagreement about the present that masquerades as a disagreement about the future. When an AI expert says the technology will transform work, she means the version she used this morning. When a teacher says the technology is unreliable, he means the version he used last month. Both are right. Neither knows the other is also right.

The honest reading of available data points toward something uncomfortable. The technology works well enough to be genuinely important. It works poorly enough to be genuinely dangerous. And it is unevenly distributed enough that the people best positioned to explain its power are increasingly unable to communicate with the people most likely to be affected by it. These three facts coexist, and they do not resolve into a tidy narrative. But they do resolve into a warning.

We have built a technology that sorts people into separate realities based on their ability to pay, then asks them to have a shared public conversation about what it means. That conversation is failing — not because anyone is lying, but because the shared object the conversation requires does not exist. There is no single “AI” to have opinions about. There is the AI the patent attorney uses and the AI the teacher tried, and the distance between them is growing faster than the discourse can track.

If the question is whether artificial intelligence will deepen existing inequality or help close it, the answer is already arriving, and it is arriving in the form of a subscription paywall. The people who most need to understand what this technology can do are the least likely to encounter it at its best. The people evangelizing its potential are the least likely to understand why their enthusiasm sounds, to everyone else, like noise. And the fault line between them is not a problem of communication or education. It is a problem of architecture — of a system that made the quality of a foundational technology contingent on the ability to pay, then acted surprised when the public fractured along exactly that line.

The ground looks solid. It is not.
