👁️
How Your Brain Sees the World
The full visual processing pipeline, from photon to perception.
Spoiler: you're not seeing reality. You're watching a movie your brain is rendering in real time.
The Big Picture
Right now, photons are bouncing off your screen, passing through a lens made of transparent living cells, landing on a postage-stamp-sized patch of neural tissue at the back of your eye, getting compressed 127-to-1, shot down a cable of one million nerve fibers, relayed through a switchboard that your cortex is actively controlling, split into parallel processing streams for edges, color, motion, and objects, recombined into a unified conscious experience, and presented to "you" approximately 200 milliseconds after it actually happened.
And your brain does all of this while simultaneously telling you it's effortless. That's the real illusion.
By The Numbers
~100B
Neurons in the human brain
Roughly 20 billion of those are in the cerebral cortex. More connections than stars in the Milky Way. And yet you still can't find your keys.
30%
Brain cortex dedicated to vision
Compare that to 8% for touch and 3% for hearing. Your brain is basically a visual processing unit with some side features.
10M
Bits per second from retina to brain
Your optic nerve transmits roughly the bandwidth of an old 10-megabit Ethernet link. But your conscious awareness bottlenecks at about 40 bits per second. The rest is filtered, compressed, and guessed at.
127M
Photoreceptors in each retina
120 million rods (black and white, motion) plus 6-7 million cones (color). All packed onto a surface the size of a postage stamp.
~200ms
Time from photon hitting retina to conscious perception
Everything you see happened a fifth of a second ago. You are literally always living in the past. Your brain predicts the present to compensate.
3-5
Saccades (eye jumps) per second
Your eyes aren't smoothly scanning the world. They're making rapid jumps, and your brain stitches the snapshots together into a smooth movie. You're watching a slideshow and don't know it.
~90%
Visual data discarded before it reaches consciousness
Your brain throws away the vast majority of incoming visual information. What you 'see' is a heavily curated highlight reel, not raw footage.
170ms
Time to recognize a face
Your fusiform face area can identify a face faster than you can blink. This dedicated neural hardware is why you see faces in toast, clouds, and electrical outlets.
The Visual Processing Pipeline
Eight stages from photon to perception. Each one is an opportunity for your brain to take a shortcut, make an assumption, or flat-out hallucinate.
Light Enters the Eye
Photons pass through your cornea and lens, which focus light onto the retina at the back of your eye. The image is flipped upside-down and reversed left-to-right. Your brain flips it back so seamlessly you never notice.
Fun Fact
Your pupil dilates up to 8mm in darkness and constricts to 2mm in bright light, controlling light intake by a factor of 16, since pupil area scales with the square of the diameter. It also dilates when you look at someone you find attractive. Your eyes are terrible at keeping secrets.
Photoreceptors Fire
The retina contains 127 million photoreceptors. Rods handle low-light and peripheral vision (that's why you see better in the dark from the corner of your eye). Cones handle color and fine detail, concentrated in the fovea, a pit in the center of your retina just 1.5mm across.
Fun Fact
You have a blind spot where the optic nerve exits the retina. No photoreceptors there at all. Your brain literally invents visual information to fill the gap. Right now, part of what you're 'seeing' is fabricated.
Retinal Processing (Yes, In the Eye)
Before signals even leave the eye, retinal ganglion cells process the raw data. They perform edge detection, motion detection, and contrast enhancement. Your retina is not a camera sensor: it's a preprocessor with its own neural network.
Fun Fact
The retina is technically part of your brain. During embryonic development, the retina forms from the same tissue as the brain and pushes outward. Your eyes are literally brain tissue exposed to the outside world.
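The retinal preprocessing described above can be sketched as a toy difference filter. This is a deliberately minimal illustration, not a model of real ganglion cells: the luminance values are invented, and the filter is just a discrete derivative that responds only where brightness changes.

```python
# Hypothetical sketch of retina-style preprocessing: a 1D luminance
# profile run through a simple difference filter. The filter output is
# zero on uniform surfaces and fires only at contrast boundaries.
luminance = [10, 10, 10, 10, 80, 80, 80, 80]  # dark region, then bright

# Discrete derivative: each output is the contrast between neighbors.
edges = [luminance[i + 1] - luminance[i] for i in range(len(luminance) - 1)]

print(edges)  # [0, 0, 0, 70, 0, 0, 0]: nonzero only at the boundary
```

The uniform regions vanish entirely, which is exactly the point: the edge carries all the information worth sending down a bandwidth-limited optic nerve.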
The Optic Nerve Highway
One million nerve fibers per eye carry processed signals to the brain. The signals partially cross at the optic chiasm: the left half of each eye's visual field goes to the right hemisphere and vice versa. This crossover is why brain damage to one hemisphere affects vision on the opposite side.
Fun Fact
The optic nerve compresses 127 million photoreceptors into 1 million fibers, a 127:1 compression ratio. JPEG wishes it could be this efficient.
Thalamus Relay (LGN)
Signals hit the Lateral Geniculate Nucleus in the thalamus, which acts as a switchboard. It organizes the visual information into layers: motion, color, and fine detail are separated into parallel processing streams. It also receives feedback from the cortex, meaning higher brain areas can influence what gets passed forward.
Fun Fact
About 80% of the input to the LGN comes from the cortex, not the eyes. Your brain is telling your eyes what to look for more than your eyes are telling your brain what they found. Perception is top-down, not bottom-up.
Primary Visual Cortex (V1)
Signals arrive at V1 at the back of your head. Here, neurons respond to specific orientations, spatial frequencies, and contrasts. The visual field is mapped retinotopically, meaning neighboring points in the world activate neighboring neurons in V1. This is where edges, lines, and textures are extracted.
Fun Fact
V1 neurons are so specialized that individual cells respond only to lines at specific angles. Hubel and Wiesel won the Nobel Prize in 1981 for discovering this. A single line tilted 10 degrees activates a completely different set of neurons.
Higher Visual Areas (V2, V3, V4, V5/MT)
Beyond V1, information splits into two streams: the ventral stream ('what pathway') flows toward the temporal lobe for object recognition, and the dorsal stream ('where pathway') flows toward the parietal lobe for spatial awareness and motion. Color processing happens in V4. Motion processing happens in V5/MT.
Fun Fact
Damage to V4 causes cerebral achromatopsia, where you can see fine but the world has no color. Damage to V5/MT means you can see static objects but can't perceive motion. A cup of coffee being poured appears as a frozen series of snapshots, not a continuous flow.
Conscious Perception (The Hard Part)
After all this processing, your brain constructs a unified visual experience. It combines edges, colors, motion, depth, and object recognition into a seamless scene. The binding problem, how all these parallel streams combine into a single experience, is one of the biggest unsolved questions in neuroscience.
Fun Fact
By the time you consciously 'see' something, your brain has already processed it, made predictions about it, and prepared motor responses. Consciousness is the press conference after the decisions have already been made.
Why Your Brain Takes Shortcuts
Your brain receives 10 million bits per second but can only consciously process about 40. The gap between those two numbers is where optical illusions live.
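The size of that gap is worth making concrete. A back-of-envelope calculation using the two figures above:

```python
# The bandwidth gap described above, as plain arithmetic.
retinal_input_bps = 10_000_000   # ~10 million bits/s from the retina
conscious_bps = 40               # ~40 bits/s of conscious throughput

compression_ratio = retinal_input_bps // conscious_bps
print(f"{compression_ratio:,}:1")  # 250,000:1
```

A quarter-million-to-one reduction, and every shortcut below exists to make that lossy compression look lossless.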
Size Constancy
Your brain automatically adjusts perceived size based on estimated distance. A car 100 meters away projects a tiny image on your retina, but you perceive it as full-sized because your brain scales it up based on depth cues.
Why It Matters
Without this, the world would look like a funhouse mirror every time something moved closer or farther away.
Exploited By
The Ponzo Illusion, Ames Room, and the Moon Illusion all hack size constancy by feeding your brain wrong depth information.
Color Constancy
Your brain subtracts the ambient illumination to determine the 'true' color of objects. A white shirt looks white in sunlight, fluorescent light, and candlelight, even though the wavelengths reaching your eye are completely different in each case.
Why It Matters
Without color constancy, every color would shift dramatically every time the lighting changed. You wouldn't recognize your own clothes.
Exploited By
The Dress (2015) broke the internet because different brains made different assumptions about the lighting, causing some people to see blue/black and others white/gold from the same photo.
Filling In (Amodal Completion)
Your brain fills in information behind occluding objects. When a cat walks behind a fence, you perceive a continuous cat, not a series of cat fragments between slats.
Why It Matters
Without filling in, the visual world would be full of unexplained gaps and fragments. You'd lose track of objects every time something passed in front of them.
Exploited By
The Kanizsa Triangle exploits the same completion machinery by placing pac-man-shaped cutouts where triangle corners would be. Your brain fills in edges that don't exist (strictly speaking that's modal completion, the visible cousin of amodal completion).
Prior Assumptions (Bayesian Inference)
Your brain uses prior experience to resolve ambiguous inputs. It asks: 'What is the most likely explanation for this pattern of light?' and goes with the answer that has worked most often in the past.
Why It Matters
This makes perception fast but biased. Your brain isn't objectively analyzing the data. It's running a probability calculation weighted heavily toward past experience.
Exploited By
The Hollow Face Illusion works because faces are almost always convex. Your brain's prior 'faces pop outward' is so strong that it overrides stereo depth cues even when you're holding the hollow mask.
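The Hollow Face logic can be written out as a textbook Bayesian update. All the numbers here are invented for illustration; the point is only the shape of the calculation: a strong prior beats moderately reliable evidence.

```python
# Toy Bayesian update illustrating why the Hollow Face Illusion wins.
# All probabilities are made up: a lifetime prior that faces are convex
# overwhelms stereo depth cues that genuinely say "concave".
prior_convex = 0.999
prior_concave = 1 - prior_convex

# Likelihood of the observed depth cues under each hypothesis
# (the cues favor "concave", but only by about 20:1).
p_cues_given_convex = 0.05
p_cues_given_concave = 0.95

posterior_convex = (p_cues_given_convex * prior_convex) / (
    p_cues_given_convex * prior_convex + p_cues_given_concave * prior_concave
)
print(f"{posterior_convex:.2f}")  # ≈ 0.98: the prior wins, you see a convex face
```

Even with the evidence pointing 20:1 the other way, the posterior still sides with the prior. That is the Hollow Face Illusion in three lines of arithmetic.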
Edge Priority
Your brain processes edges and boundaries far more than uniform surfaces. The Cornsweet Illusion proves this: a subtle edge gradient makes two identical surfaces look like completely different shades because your brain extrapolates the edge signal across the entire surface.
Why It Matters
This is why line drawings are instantly recognizable. Your brain needs edges more than surface detail to identify objects.
Exploited By
Makeup contouring, the Cornsweet Illusion, and the Mach Band effect all exploit edge-priority processing.
Motion Detection Bias
Your brain is wired to detect motion, even when there is none. Peripheral vision is especially motion-sensitive, a survival adaptation for detecting threats approaching from the side.
Why It Matters
This is why static patterns with asymmetric contrast gradients (like Rotating Snakes) trigger motion signals. Your peripheral vision's hair-trigger motion detection fires on false positives.
Exploited By
Akiyoshi Kitaoka's motion illusions, the Waterfall Effect, and any scrolling pattern that seems to 'breathe' in your peripheral vision.
Edge Detection: Your Brain's Favorite Cheat Code
Why edges, not surfaces?
Your visual cortex prioritizes edges and boundaries over uniform surfaces because edges carry the most information per pixel. The boundary between two regions tells your brain about shape, depth, object identity, and movement. A uniform surface tells it almost nothing. This is why a simple line drawing of a face is instantly recognizable, but a blurred photograph of the same face at the same resolution might not be.
Lateral inhibition: how edges get sharpened
Adjacent neurons in your retina inhibit each other. When a bright area borders a dark area, the neurons on the bright side suppress their dark-side neighbors, and vice versa. This creates an exaggerated contrast at the boundary: the bright side looks brighter and the dark side looks darker than they actually are. This is called the Mach Band effect, and it's happening in your retina before the signal even leaves your eye.
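Lateral inhibition is simple enough to simulate. The sketch below is a toy, not a physiological model: each "cell" outputs its own input minus half the average of its two neighbors, and the weights and luminance values are invented. Real retinal receptive fields are center-surround and two-dimensional.

```python
# Toy lateral inhibition on a step edge. Each cell is inhibited by
# half the average of its two neighbors' inputs.
signal = [10, 10, 10, 10, 80, 80, 80, 80]  # a simple dark-to-bright step

inhibited = []
for i in range(1, len(signal) - 1):
    surround = (signal[i - 1] + signal[i + 1]) / 2
    inhibited.append(signal[i] - 0.5 * surround)

print(inhibited)
# The cell just dark of the edge dips below its uniform neighbors, and
# the cell just bright of the edge overshoots: Mach bands, computed
# before the signal ever leaves the eye.
```

Interior cells settle at flat values, but the two cells straddling the edge get pushed apart, which is exactly the exaggerated boundary contrast the paragraph above describes.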
The Cornsweet consequence
Because your brain extrapolates surface properties from edges, you can trick it by manipulating just the edge. The Cornsweet Illusion places a gradient only at the boundary between two identical gray surfaces. Your brain reads the edge gradient and concludes "left side is darker, right side is lighter," then paints the entire surfaces accordingly. Two identical grays look completely different because your brain trusted the edge over the surface.
Depth Perception: Building 3D From 2D
Binocular Disparity
Binocular
Your two eyes see slightly different images. Your brain calculates depth from the difference. This only works for objects within about 20 feet. Beyond that, the difference is too small to measure.
Motion Parallax
Monocular
When you move your head, nearby objects shift more than far ones. This is why you bob your head when trying to judge distance. Even pigeons do this while walking.
Texture Gradient
Monocular
A gravel road has visible individual stones near you but blurs into a smooth surface in the distance. Your brain uses this texture density change as a depth ruler.
Atmospheric Perspective
Monocular
Distant mountains look blue and hazy because light scatters through more atmosphere. Your brain learned 'blurrier and bluer = farther away' and painters have exploited this since the Renaissance.
Occlusion
Monocular
If object A covers part of object B, A is in front. The simplest depth cue and the hardest to fool. This is why the Penrose Triangle is so unsettling: the occlusion cues contradict each other.
Linear Perspective
Monocular
Parallel lines converge toward a vanishing point. Railroad tracks, hallways, roads. Your brain interprets convergence as depth, which is why the Ponzo Illusion works.
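Binocular disparity, the one cue above that needs two eyes, reduces to simple triangulation. A hedged sketch, with illustrative numbers: the 6.4 cm interocular distance and 17 mm effective focal length are textbook approximations, not figures from this article, and real stereopsis is far messier than one division.

```python
# Stereo depth from retinal disparity: the triangulation Z = f * B / d.
interocular_baseline_m = 0.064   # ~6.4 cm between adult human pupils
focal_length_m = 0.017           # ~17 mm effective focal length of the eye

def depth_from_disparity(disparity_m: float) -> float:
    """Triangulated distance for a given retinal disparity (meters)."""
    return focal_length_m * interocular_baseline_m / disparity_m

# A nearby object produces a large disparity; a distant one, a tiny one.
print(depth_from_disparity(0.001))      # ~1.1 m away
print(depth_from_disparity(0.0000544))  # ~20 m: disparity almost gone
```

Note how fast disparity shrinks with distance: past about 20 feet the retinal difference is down in the noise, which is why the brain hands the job over to the monocular cues.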
Face Recognition Bias: The Fusiform Gyrus Problem
Your brain has a dedicated neural area for face recognition: the fusiform face area (FFA). It can identify a face in about 170 milliseconds, faster than you can consciously decide what you're looking at. This is not a general object recognition system. It's specialized hardware, evolutionarily optimized because recognizing faces (friend or foe, angry or happy) was critical for survival.
The downside: your brain is so eager to find faces that it sees them everywhere. This is called pareidolia. Electrical outlets have surprised expressions. The front of a car is a face. A rock formation on Mars is definitely a human face (NASA eventually re-imaged it at higher resolution just to debunk the idea). The moon has a face. Toast has a face. Everything has a face because your fusiform gyrus has a zero-tolerance policy for missing one.
This is also why the Hollow Face Illusion is so powerful. Your face-processing system is so hardwired that it overrides raw depth data from your eyes. You can hold a concave mask, feel that it's hollow, and your visual system will still render it as convex. The face hardware does not accept "concave face" as valid input. It would rather hallucinate a normal face than accurately represent a weird one.
Interesting exception: People with schizophrenia are often immune to the Hollow Face Illusion. This suggests that their top-down prediction systems work differently, with less influence from prior expectations. In this one specific case, a disrupted prediction system actually produces more accurate perception.
Motion Processing: Detecting What Isn't Moving
The survival priority of motion
Your brain dedicates an entire visual area (V5/MT) to motion processing. Evolutionarily, detecting motion was life-or-death: a rustle in the grass could be a predator, a shadow moving overhead could be a hawk. Your peripheral vision is especially motion-sensitive because threats usually approach from the side, not the center.
Why static images appear to move
Akiyoshi Kitaoka's "Rotating Snakes" illusion works because repeating asymmetric color sequences (black-blue-white-yellow) trigger sequential firing in motion-detection neurons. Your peripheral vision, optimized for sensitivity over accuracy, interprets these micro-signals as genuine rotation. The image is completely static, but your motion processing system is so eager to find movement that it creates it from contrast patterns.
The waterfall aftereffect
Stare at a waterfall for 30 seconds, then look at a stationary surface. The surface appears to drift upward. This happens because your downward-motion neurons fatigue from sustained firing. When you look away, the still-fresh upward-motion neurons dominate, creating a net upward signal from a static scene. Your brain is literally reporting motion where there is none. Same reason the ground feels weird after stepping off a treadmill.
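The aftereffect drops out of a toy opponent-motion model. Everything here is invented for illustration: two channels (up and down) with equal baseline firing, where sustained downward motion fatigues the down channel's gain, so a static scene produces a net "upward" signal.

```python
# Toy opponent-motion model of the waterfall aftereffect.
baseline_firing = 10.0  # made-up firing rate shared by both channels

def perceived_motion(down_gain: float) -> float:
    """Net signal: positive = upward drift, negative = downward, 0 = static."""
    up = baseline_firing
    down = baseline_firing * down_gain
    return up - down

print(perceived_motion(1.0))  # 0.0: fresh neurons, a static scene looks static
print(perceived_motion(0.5))  # 5.0: fatigued down cells, the scene drifts "up"
```

The scene never changes; only the gain does. That asymmetry between two otherwise balanced channels is the entire illusion.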
Color Constancy: Your Brain's White Balance
A white piece of paper reflects completely different wavelengths under sunlight (bluish), fluorescent lighting (greenish), and candlelight (yellowish). But you perceive it as white in all three conditions. Your brain is running an automatic white-balance algorithm, subtracting the estimated illumination to extract what it believes is the object's "true" color.
This is incredibly useful. Without it, every time you walked from indoors to outdoors, every color would dramatically shift. But it's also a source of error. When the lighting is ambiguous, different brains make different guesses about the illuminant, leading to genuinely different color perceptions of the same stimulus. This isn't a difference of opinion. It's a difference of neural computation. This is exactly what happened with The Dress in 2015.
Paint shopping tip: This is why paint samples look different in the store versus your living room. The store has fluorescent lighting; your home has warm incandescent. Your brain's color constancy algorithm adapts in both contexts but makes different adjustments, so the "same" color looks different. Always test paint samples in your actual room under the actual lighting.
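One of the simplest algorithms in the same spirit as the brain's white balance is the classic "gray-world" assumption: assume the scene averages out to gray, and divide away any overall color cast. This is a hedged sketch, not a model of cortical color constancy, and the RGB values are invented.

```python
# Gray-world white balance: scale each channel so the scene average
# becomes neutral gray, removing the illuminant's color cast.
def gray_world(pixels):
    """pixels: list of (R, G, B) tuples. Returns color-corrected pixels."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3  # target: equal energy in all three channels
    return [tuple(p[c] * gray / avg[c] for c in range(3)) for p in pixels]

# A "white" sheet of paper under warm candlelight: red-shifted raw input.
warm_scene = [(200.0, 150.0, 100.0), (220.0, 165.0, 110.0)]
for corrected in gray_world(warm_scene):
    print([round(c) for c in corrected])
# [150, 150, 150]
# [165, 165, 165]
```

After correction, every pixel has balanced R, G, and B: the red-shifted paper reads as white again. Your visual system solves the same problem with far more sophistication, which is exactly why The Dress could fool it.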
Glen's Take
Here's the part that gets me: your brain isn't showing you the world. It's showing you a heavily edited, aggressively compressed, prediction-heavy reconstruction of the world. It fills in your blind spot. It stabilizes the image during saccades. It color-corrects for lighting. It predicts where moving objects will be 200ms from now because by the time you "see" something, it already happened.
Consciousness is the press conference after the decisions have already been made. Your brain makes the call, then presents it to "you" as if you were there for the whole process.
Optical illusions aren't bugs. They're feature demonstrations. They show you exactly where your brain's compression algorithm trades accuracy for speed. And the trade is worth it — because a brain that processed every pixel accurately would be too slow to dodge a thrown object, catch a ball, or notice a snake in the grass.
FAQ
How much of the brain is dedicated to vision?
Approximately 30% of the cerebral cortex is dedicated to visual processing, making vision by far the dominant sense in humans. Compare that to roughly 8% for touch and 3% for hearing. Your brain is, architecturally speaking, a visual processing machine with some bonus features bolted on.
Why does the brain take visual shortcuts?
Bandwidth. The retina sends about 10 million bits per second to the brain, but conscious awareness can only handle about 40 bits per second. To bridge this 250,000:1 gap, your brain uses heuristics: size constancy, color constancy, edge priority, Bayesian priors, and filling in. These shortcuts are right 99% of the time. Optical illusions are the 1%.
Is what we see actually reality?
No. What you consciously perceive is a reconstructed model that your brain builds from incomplete data. It fills in your blind spot, stabilizes images during eye movements, adjusts colors for lighting, predicts object positions, and invents details in your peripheral vision. Neuroscientists sometimes call conscious perception a "controlled hallucination" — one that usually matches reality closely enough to keep you alive.