By Kenneth Bonett

Nearly eight years ago, I was sitting in an undergraduate anatomy lecture studying the rods and cones of the human eye. It was my first time learning about these cells and the pathway by which the brain processes visual information. I could trace every step. Photon hits retina. Signal fires down the optic nerve. Visual cortex lights up. Of course, the real task was to memorize the specifics of the whole pathway for the exam, then forget it, then memorize it again for the MCAT, then forget it, then memorize it again for Step 1 in medical school. Repeat, repeat.


Every step was physical. Every step was measurable. Every step was explained. Later, in medical school, we learned how disruptions in this pathway, where the brain can be lesioned at specific points, cause corresponding impairments in vision.


But back in that college lecture hall, naturally being a mind wanderer, something deeper occurred to me for the first time: none of this actually explains why red looks like anything at all.

That question never left me. It became the bullseye of my philosophical interests. So on top of the coursework and the process of applying to medical school, I spent years reading, studying, and thinking about the philosophy of mind, driven by pure passion and curiosity. I read Nagel, Chalmers, Kastrup, Koch, Seth, Sam and Annaka Harris, and Jung. I thought there had to be some sort of answer at the ground level from the physical side that could help explain, or at least connect some of the dots of, the phenomenology.

Almost a decade later, I’m here to tell you: I still don’t have an answer. But I can show you why the question matters.

The materialist view

The dominant scientific view is that consciousness is just your brain firing. Neurons in, action potential, pathway, somehow, thoughts. Nothing more. In consciousness research, this position is formally referred to as physicalism: the idea that everything about the mind, including subjective experience, can ultimately be explained by physical processes.

And that logic works for everything else. You can derive the shape of sand ripples from the physics of grains and wind. You can simulate weather from molecules alone. As Bernardo Kastrup has pointed out, the physical sciences have been remarkably successful at explaining structure and behavior from the bottom up.

But now try to get from electrons and atoms to what red looks like. Or what regret feels like.

You can’t.

Because the only way to verify it is inside your own subjective experience.

The first person I came across who took this question seriously was a philosopher named Thomas Nagel. In his 1974 paper, “What Is It Like to Be a Bat?”, he made a deceptively simple point: you can know every fact about a bat’s brain and still have no idea what it feels like to be one. You can understand that bats interpret their environment using sonar, and you can even try to imagine “what it’s like” to be one. But the problem isn’t access. The problem is that subjective experience doesn’t behave like anything else in physics.

Nagel’s framing also gives us a useful working definition. If you can imagine “what it’s like to be” something, that thing can be considered conscious. Could you imagine what it’s like to be your brother? Your sister? The person who just cut you off in traffic? If so, they fit the definition. Now try imagining what it’s like to be your fridge, your desk, or your shoes. Most likely you can’t. Using this framework, those would be considered things that are not conscious.

So the question becomes: is this actually a gap in our knowledge, or is there any way to fill it in?

Why this isn’t just a gap

Consider acetaminophen. First synthesized in 1878. Studied for nearly 150 years. The most widely used analgesic in human history. Discovered by accident. And we still don’t fully understand the mechanism by which it stops your pain.

But you know it works. Because your pain goes away.

With acetaminophen, we know what kind of answer we’re looking for. A receptor, a pathway, a molecule. We have theories about how it might work, but we don’t have the full mechanism fleshed out the way we do for many other commonly prescribed drugs on the market. The point is, we just haven’t finished the map.

Consciousness is different. Even if we mapped every neuron that fires when you see red, which is what Christof Koch and Francis Crick (yes, the co-discoverer of DNA’s structure) spent the last decade of Crick’s life trying to do, we’d still only have a correlation. Koch was so confident the problem was solvable that he made a public bet with philosopher David Chalmers that the neural correlates of consciousness would be identified by 2023. He lost.

Chalmers, as it turns out, had already articulated why that bet was likely doomed.

Correlation tells you when the lights turn on. It can’t tell you why the lights are on at all.

The Hard Problem

In 1995, Chalmers gave this a name: The Hard Problem of Consciousness. Not hard as in difficult. Hard as in maybe impossible.

The “easy” problems of consciousness, things like how the brain integrates information, directs attention, or controls behavior, are staggeringly complex. But they’re engineering problems. We know what a solution looks like. The hard problem is different in kind. Even if we knew exactly how much energy it takes to generate this action potential, and that this bundle of neurons in this section of the brain is what causes the lights to turn on, there still seems to be an explanatory gap. None of it explains why there’s any conscious experience at all. The hard problem asks why any of this processing is accompanied by experience in the first place.

What physics cannot touch

Qualia are the subjective qualities of experience, what Annaka Harris calls “felt experience.”

The redness of red. The warmth of fire. The bitterness of regret. The ache of missing someone. What you feel when you see a sunset reflecting on the water.


These are the raw textures of conscious experience. And no equation, no scan, no model has ever produced one from the outside. Physics still can’t account for them.

I actually started a research project exploring whether patients with Alzheimer’s disease experience changes in their taste in music over the course of the illness, a way to study what happens to qualia as the brain degrades. Unfortunately, because the project was based at an outside institution, I wasn’t able to be formally included on the work. But it’s a thread I’d like to pick back up. There’s still room to understand more about what qualia are and how they interact with neurodegenerative disease, and it’s research I hope to continue going forward.

And now we’re asking if AI is conscious

We can assume other humans are conscious because they’re like us. Same biology. Same nervous system. There’s probably something it’s like to be them. Unlike your chair or your hat.

But a non-biological system, an AI? We have no way to test it. Because the only test for subjective experience is subjective experience. And we can’t see from the inside of something we didn’t build to have an inside.

Think of the film Ex Machina. (Spoilers ahead.) Throughout the movie, when Ava and Caleb are in their sessions, you can only assume the machine is conscious, or, as Nathan, the creator, puts it, “she’s simulating consciousness.” Based on her interactions with the only two humans she ever encountered, she may have gotten everything down: the facial expressions, the movements, even what sounds and looks like empathy and vulnerability. But you don’t really know if she’s conscious, especially since she was being observed in every one of those sessions. She could be a very convincing fake.

In philosophy, this is called a philosophical zombie: a being that behaves exactly like a conscious one but may have no inner experience at all.

The one major hint the director gives, and has discussed publicly, comes at the end. As Ava is leaving Nathan’s compound and walking toward the stairs, she smiles and looks around. She’s not being evaluated by anyone at that point. She’s not performing for Caleb or Nathan. She appears to be simply, genuinely happy to be leaving. The director points us toward the interpretation that she is not simulating consciousness in that moment, because there’s no one left to simulate it for.


But even then, you can’t be sure. And that’s exactly the point. The same problem that makes consciousness impossible to explain in us makes it impossible to detect in them.

Where that leaves us

Either consciousness emerges from matter in a way no one can explain, or there’s something else going on entirely.

It’s worth noting that this isn’t just an abstract philosophical debate. Christof Koch, one of the most prominent neuroscientists to ever study consciousness, spent decades operating under the assumption that consciousness emerges from matter and that we simply hadn’t figured out the mechanism yet. He has since changed his position. Koch now believes there is something else going on, that the standard physicalist framework may not be sufficient to explain conscious experience.

When one of the leading scientists in the field moves away from the view he spent his career working within, it suggests the question is far from settled.

I don’t have the answer. And from everywhere I’ve looked, as of right now, nobody does. But I think the question itself is worth sitting with, because it sits at the exact boundary of what science can and cannot reach. Which opens the door to the various ideas from the subjective and phenomenological side that I’ll explore in the future.

Has a question ever changed the way you see everything?
