Reality Is Internal, Revisited (2019–2026)
The On-Ramp Lesson (2019)
One evening in 2019, I pulled onto the Highway 91 on-ramp and braked for a red light out of sheer habit. In reality, the ramp meter wasn't lit at all – traffic was flowing freely – yet I stopped for a non-existent signal. I wrote about it at the time in a LinkedIn piece titled Reality Is Internal.
Stranger still, the driver in the next lane hit her brakes as well. When I realized my mistake, I chuckled and accelerated, but the other driver glared and shook her head.
She seemed to blame me for her own unnecessary stop. Two people on the same road, in the same moment, had constructed completely different realities.
I assumed she was angry at me, and she presumably assumed I’d seen a hazard she hadn’t. We were both reacting to internal cues and stories rather than the actual state of the traffic light.
What I Got Right (and Oversimplified)
At the time, I drew a bold conclusion: “We have an inner world that affects how we see the outer world – not the other way around.” My 2019 take was that our perceptions and emotions are shaped from the inside, and we often misinterpret external situations based on internal assumptions.
I even suggested we should “develop an internal environment that reduces the assumptions made about the external environment.” That insight still feels true — our mindset can indeed color what we see. But I also oversimplified. I implied that by perfecting our “internal environment,” we could see outer reality clearly.
In hindsight, I underestimated how much the mind’s guesses, social cues, and cognitive limits always influence us. The goal isn’t to eliminate assumptions (impossible), but to update our mental map continuously as new information comes in.
Perception as Prediction
Upgrading the Model
Modern cognitive science suggests that perception itself is a form of inference and prediction. Our brains are not passive cameras; they are active prediction machines that constantly guess what’s coming and then correct those guesses with sensory data. In other words, the brain continuously generates and refines an internal model of the world, checking incoming inputs against what it expected.
This “predictive processing” framework wasn’t explicitly on my mind in 2019, but it aligns with my intuition that reality is internal. Back then, I intuited that we “misinterpret” the world often; now I understand that’s because perception is, as neuroscientist Anil Seth puts it, a kind of “controlled hallucination.”
We don’t just see what’s happening; we interpret signals through the lens of prior experience and beliefs. So what does this change about the 2019 story? It means my stopping at the dark light was my brain’s prediction (a habit pattern) overruling raw data, and the other driver’s reaction was her brain’s best guess about why I braked.
Each of us was running a silent simulation shaped by past experience. The upside: if the mind is a model-updating machine, then agency comes from improving the model, not from magically purging all bias.
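The predict-and-correct loop described above can be caricatured in a few lines of code. This is only an analogy, not a model of the brain; the function name and the numbers are mine, chosen to illustrate the idea: a prior belief gets nudged toward each new observation in proportion to how much weight we give the evidence versus the habit.

```python
def update_belief(belief: float, observation: float, trust_in_senses: float) -> float:
    """Nudge a prior belief toward new evidence.

    trust_in_senses ranges from 0 (ignore the data, keep the prior)
    to 1 (discard the prior, accept the data wholesale).
    """
    prediction_error = observation - belief
    return belief + trust_in_senses * prediction_error

# A habit-driven prior: "ramp meters are usually red at rush hour."
belief = 0.9            # subjective probability that the light is red
observation = 0.0       # the meter is actually dark: no red signal

# Deep in thought, I weight my prior heavily and "see" a red light.
distracted = update_belief(belief, observation, trust_in_senses=0.1)

# With full attention, the same evidence overturns the habit.
attentive = update_belief(belief, observation, trust_in_senses=0.9)

print(round(distracted, 2))  # 0.81 -> still mostly convinced it's red
print(round(attentive, 2))   # 0.09 -> belief updated to match the road
```

Same light, same data; the only difference is how much bandwidth each driver had to weigh the evidence. That, in miniature, is why two people on the same on-ramp can construct two different realities.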
Social Cues and Bandwidth
A Kinder Re-Read
Revisiting the on-ramp incident, I see the other driver with more empathy. In 2019, I chalked her glare up to “assumptions she held.” Now I recognize a common human strategy: social cueing. When uncertain, we instinctively look to others for guidance.
The classic Asch experiments showed that about 75% of people will follow a group’s obviously wrong judgment at least once. Often this isn’t because we’re blind conformists, but because we assume others might know something we don’t. On the road, if the car next to you suddenly stops, you figure there must be a reason.
In fact, research finds that drivers tend to mimic the behavior of a car in front of them at stop signs and lights, even if there’s no legal need to stop. So my fellow driver likely stopped because I did, a split-second social safety heuristic rather than a personal failing. And her annoyed look? I suspect I’d feel the same embarrassment and misplaced blame in her situation.
Our bandwidth was also a factor. I was “deep in thought” in my car, on autopilot. She might have been mentally elsewhere, too. Under cognitive load or stress, snap judgments and misreads multiply. When the mind is taxed – by distraction, anxiety, or scarcity of time – we rely on knee-jerk assumptions.
Studies show that pressing worries (like financial strain) can consume so much mental bandwidth that people’s effective IQ drops by 13 points, akin to losing a night’s sleep. In these moments, we default to habits and quick takes. We may also become more prone to attribution errors – blaming a person’s character for what’s really a situational mix-up.
For example, stress impairs the brain’s executive functions, making us more likely to fault people rather than context. With a clearer head, I might have caught myself and kept rolling past the unlit meter. With more presence, she might have trusted her own eyes instead of my behavior. Both of us were navigating with limited cognitive bandwidth, doing the brain’s favorite thing: saving energy by guessing.
Why It Matters More in 2026
Fast-forward to 2026: the world has gotten only more confusing for our poor prediction machines. Back in 2019, my misperceptions were limited by the physical here-and-now (a traffic light and a stranger).
Today, AI-generated ambiguity is everywhere. We now have deepfake videos, synthetic news articles, and bot-generated social posts clouding our information environment. In this landscape, seeing is no longer believing.
Experts warn that we’re facing not just a misinformation problem, but a broader “crisis of knowing”. When any photo, video, or quote might be fabricated, it becomes harder to trust our basic sensory inputs and social signals. This makes “reality is internal” an even thornier proposition: our internal models are only as good as the inputs and feedback we get.
The good news is there’s a growing toolbox to help us stay oriented. For instance, there’s a push for technical fixes like content provenance standards – essentially watermarks or metadata that certify who made a piece of media and whether it’s been altered.
Initiatives like the Coalition for Content Provenance and Authenticity (C2PA) are working to embed a traceable history into digital content. Meanwhile, educators and psychologists emphasize that human skills need upgrading, too. One emerging concept is critical ignoring – the art of knowing what not to pay attention to online.
We simply cannot process every alert, rumor, or “urgent” update thrown at us; the ability to filter noise is as important as traditional critical thinking. Professional fact-checkers, for example, practice lateral reading: instead of staying on a suspicious page, they open new tabs to verify the source and context elsewhere.
All these developments reinforce a humbling truth: our internal reality isn’t infallible, so we must continually refine our maps against a very mutable territory.
Sidebar: Seven Years of Updating My Map
2019: Realized that two drivers can interpret the same situation differently based on internal assumptions. Resolved to be more aware of my own mental narratives.
2020: Learned that perception is not passive. Embraced the idea that my brain is a prediction engine, always guessing and checking reality – which means I should question my first impressions.
2021: Recognized the cost of mental overload. Reading research on scarcity and cognition drove home how limited bandwidth leads to errors. I started building more buffer (time, pauses) before reacting under pressure.
2022: Noticed social influence everywhere. I caught myself copying others’ behaviors (online and offline) without good reason, echoing classic conformity effects. This made me less quick to judge others for “following the herd.”
2023: Adopted new info hygiene. I began practicing lateral reading and critical ignoring by default – opening extra tabs to fact-check big claims, and tuning out clickbait or outrage bait (informed by my dissertation and book, Simulated Realities).
2024: Started trusting provenance over virality. I began looking for signs of authenticity in images/videos (like verification badges or source metadata) and grew skeptical of sensational visuals unless proven authentic.
2025: Understood the “crisis of knowing.” Deepfakes and AI blurring truth made me double down on epistemic humility. I now assume my perception can be wrong even if something feels real, and I seek corroboration more than ever.
Maintaining Bandwidth for Map Updating
Three Practices
Staying oriented in an age of blinding headlights and fake road signs requires active mental maintenance. Here are three habits I’ve found invaluable:
Pause Before Attribution: When something goes wrong, resist the reflex to assign blame or motive in a split second. Give your brain a moment to consider situational factors. That other driver on the ramp? Maybe she was just following a cue, not being obtuse. A brief pause can prevent a rush to judgment shaped by stress or bias.
Triangulate Your Information: Treat surprising claims or vivid stories like a traffic light that might be malfunctioning – look at multiple sources before you decide what’s real. This is the principle of lateral reading: verify elsewhere rather than staying in one unreliable lane. In practice: open a new tab, search the quote or image, and see if it holds up under different lights.
Practice Critical Ignoring: Your attention is finite. You wouldn’t stop at every fake red light on the internet, so learn to cruise past the noise. Mute or ignore content that’s designed to hijack your emotions. Curate your feeds, use tools to block pervasive trolls or clickbait, and choose your informational diet actively. Not everything deserves a reaction or even a glance.
So, what’s worth the mental effort now?
In 2019, I ended by asking if this discussion was worth it. Today, my answer is yes – staying oriented is absolutely worth it. But we have to be smart about where we spend our mental energy.
Our internal reality will never be a perfect mirror of the world, yet by continuously updating our maps – pausing, checking, filtering, and learning – we can navigate toward truth without burning out.
In a world of endless predictive cues and AI-generated mirages, the most crucial skill may be knowing when to stop, when to go, and when to question the light entirely.

