Tension: We’ve outsourced our moral development to recommendation engines that optimize for engagement rather than wisdom, creating value systems built on what keeps us scrolling.
Noise: Tech companies frame personalization as serving your interests while actually training you to adopt the beliefs and preferences that generate the most profitable user behavior.
Direct Message: Your values should emerge from lived experience and genuine reflection, not from content designed to trigger strong reactions that keep you on platform longer.
You open TikTok intending to watch a quick video about sourdough bread. Twenty minutes later, you’ve developed strong opinions about minimalism, productivity hacks, and why everyone should wake up at 5 AM.
You weren’t looking for life advice, but the algorithm decided you needed it anyway. So, whether you agreed to it or not, your feed becomes increasingly populated with content that reinforces specific worldviews.
The videos that appear aren’t random — they’re the result of sophisticated systems analyzing your engagement patterns and serving you more of whatever kept you watching longest. The algorithm has determined what you care about based not on what you say you value, but on what makes your thumb pause mid-scroll.
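If you sketched that logic in code, the core of it would be surprisingly short. Here is a toy model in Python, not any platform’s actual system: the names (`Video`, `predicted_watch_fraction`, `watch_history`) are invented stand-ins for far more sophisticated machinery, but the objective they express is the real one.

```python
# A hypothetical sketch of engagement-optimized ranking.
# Real recommenders are vastly more complex, but the goal is the same:
# rank by predicted engagement, not by what the user says they value.

from dataclasses import dataclass

@dataclass
class Video:
    topic: str
    # Predicted fraction of the video this user will watch,
    # learned from past behavior (pauses, rewatches, completions).
    predicted_watch_fraction: float

def rank_feed(candidates: list[Video],
              watch_history: dict[str, float]) -> list[Video]:
    """Order candidates by predicted engagement, boosted by topics
    that have held this user's attention before."""
    def score(video: Video) -> float:
        # Past watch time on a topic raises every future video on it:
        # the loop that turns one sourdough video into a worldview.
        topic_affinity = 1.0 + watch_history.get(video.topic, 0.0)
        return video.predicted_watch_fraction * topic_affinity
    return sorted(candidates, key=score, reverse=True)

# Nothing you consciously chose appears in the scoring. Only what
# made your thumb pause mid-scroll does.
feed = rank_feed(
    [Video("sourdough", 0.4), Video("minimalism", 0.6)],
    watch_history={"minimalism": 2.0},  # hours watched, by topic
)
```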
The unsettling part is how natural it feels. You start absorbing perspectives and adopting stances on topics you’d never consciously chosen to explore.
The content arrives with such frequency and consistency that it begins to feel like consensus, like everyone agrees these things matter. The platform is teaching you to care about these things by repeatedly showing you that other people do.
How preference becomes conviction
Traditional value formation happened through slower processes: conversations with people you trusted, experiences that challenged your assumptions, books you chose deliberately, mistakes that taught you what actually mattered to you.
Algorithmic value formation works differently. When translating research into practical applications, I’ve noticed how recommendation systems essentially function as operant conditioning at scale. They’re training you to adopt specific value systems by rewarding engagement with those systems through dopamine hits and social validation.
The platforms have figured out that moral certainty drives engagement. Content that presents clear good-versus-evil narratives, that identifies villains and heroes, that makes you feel righteous anger or virtuous agreement keeps you watching longer than nuanced discussion.
So the algorithm learns to serve you increasingly polarized content because polarization is good for metrics.
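A deliberately crude simulation makes that drift visible. Every number here is invented for illustration (the 10 percent engagement edge for emotionally charged content, the update rule), but it shows how a system that only chases watch time slides toward extremes without anyone designing it to:

```python
# Toy feedback loop: assume charged content earns slightly more watch
# time on average. The system never "chooses" polarization; it just
# follows the gradient of engagement.

def update_feed_mix(polarity_share: float, engagement_lift: float = 0.1,
                    learning_rate: float = 0.5) -> float:
    """One cycle: measure engagement, shift the mix toward whatever
    engaged more, repeat."""
    neutral_engagement = 1.0
    polarized_engagement = 1.0 + engagement_lift  # the invented premise
    avg = (polarity_share * polarized_engagement
           + (1 - polarity_share) * neutral_engagement)
    # Charged content beats the feed average, so its share grows.
    gradient = polarized_engagement - avg
    return min(1.0, polarity_share + learning_rate * gradient * polarity_share)

share = 0.05  # start with 5% emotionally charged content
for cycle in range(30):
    share = update_feed_mix(share)
print(f"Share after 30 cycles: {share:.0%}")  # drifts steadily upward
```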
You watch one video about sustainable fashion. Suddenly your entire feed is filled with content about fast fashion’s environmental impact, ethical consumption, and why your shopping habits are destroying the planet.
You didn’t seek out this moral framework. The algorithm built it around you, video by video, until caring intensely about this issue feels like a core part of your identity.
Within days, you can develop strong opinions about complex topics based entirely on curated content designed to make you feel certain. There’s no time for the doubt and questioning that usually accompany genuine value development.
The disguise of personalization
Platforms frame algorithmic curation as personalization serving your authentic interests. Labels like “Because you watched” and “Recommended for you” suggest that the content reflects who you already are, not that it’s actively shaping who you’re becoming.
Tech companies have weaponized the language of authenticity and self-discovery to disguise what’s actually a sophisticated behavior modification system. “Find your people” really means “we’ve identified a demographic segment you’ll engage with consistently.”
Social validation amplifies this effect. When you see thousands of comments agreeing with a perspective, when creators you admire all share similar views, when the most popular content consistently reinforces certain values, it becomes psychologically difficult to maintain different beliefs.
The algorithm isn’t just showing you content. It’s showing you what appears to be social consensus, making dissent feel like isolation.
Meanwhile, nuance doesn’t perform well in recommendation systems. Complexity doesn’t generate shares. Content that makes you pause and think rather than immediately react gets deprioritized in favor of content that triggers strong emotional responses. Your feed becomes increasingly populated with certainty and stripped of ambiguity.
What gets lost in the optimization
Values developed through algorithmic exposure lack the depth and resilience of values formed through lived experience. They’re optimized for your immediate engagement rather than your long-term flourishing, which creates belief systems that feel intensely important but collapse under real-world complexity.
Reclaiming deliberate value formation
Recognizing when your beliefs are being shaped by recommendation systems rather than genuine reflection requires developing what I call “algorithmic awareness”: the ability to notice when your strong feelings about a topic correlate suspiciously with heavy exposure to content about that topic.
Examine which issues feel most urgent to you right now.
Then trace backward: when did you start caring about this? Was it after sustained exposure to content about it? Are your friends outside social media talking about this issue, or is your sense of its importance primarily driven by your feed?
The question isn’t about dismissing concerns that arrive through digital channels, but about distinguishing between values you’ve genuinely adopted and values the algorithm has trained you to perform.
Practical steps involve intentionally disrupting the feedback loops. Seek out long-form content that doesn’t optimize for engagement. Have conversations with people whose views you don’t already know.
Read books chosen through deliberate research rather than algorithmic recommendation. Spend time in offline spaces where your values get tested against actual human complexity.
What I’ve seen work in resilience workshops is the practice of “belief auditing,” where people periodically examine their strongest convictions and ask: what experiences have I had that confirm this belief? Not what content have I consumed, but what have I actually lived through that makes this true for me?
Values grounded in lived experience tend to be more nuanced, more flexible, and more durable than values acquired through curated content.
Building what researchers call “value coherence” — the alignment between what you profess to believe and how you actually live — also matters.
Algorithmic values often exist primarily in the digital realm. You might have strong opinions about minimalism based on hours of content consumption while your actual living space remains cluttered. The disconnect reveals values you’ve performed rather than integrated.
Choosing depth over algorithmic certainty
The algorithms aren’t going anywhere, and they’re only getting more sophisticated at predicting what will keep you engaged. Expecting platforms to prioritize your wellbeing over their business model is naive.
What you can control is how deliberately you engage with the value systems being offered through your feed.
Which beliefs have you genuinely developed through experience, and which have you absorbed through repeated exposure to algorithmically curated content? The honesty required here is uncomfortable but necessary.
Your feed will keep serving you certainty packaged as authenticity. The real work involves developing the discernment to recognize when you’re being shaped by systems optimized for profit rather than wisdom, and choosing deliberate value formation over the path of least resistance.
That work is harder and slower. It also produces beliefs that actually belong to you.