Can a Digital Reality Be Jacked Directly Into Your Brain?

The idea of a synthetic experience uploaded to the mind has been a sci-fi fantasy forever. New brain-computer interfaces are making it nonfiction—very slowly.

A young man in a gray flannel robe sits calmly at a table, in front of a featureless black box. He’s wearing a cap that looks like it’s made of gauze bandages. A bundle of wires snakes out of it, emerging from the back of his head. He’s waiting for something.

A researcher in a white lab coat walks up to the table and stands silently for a moment. The man stares at the box. For a moment, nothing happens. Then the man blinks and appears slightly abashed. The researcher asks what happened.

“Just for the very first second,” he says, “I saw an eye—an eye and a mouth.”

The researcher swaps the box for a different object. This time it’s an orange soccer ball. There’s a beat, and again it’s clear that something has happened inside the man’s head. “How do I explain this?” he says. “Just like the previous one, I see an eye—an eye and a mouth, sideways.”

Strictly speaking, this man is a cyborg. His fusiform gyri, meandering ridges that run along the bottom of the brain on each side, are studded with electrodes. His doctors implanted them because they thought they’d help trace the cause of the man’s seizures. But the electrodes also offer a rare opportunity—not just to read signals from the brain but to write them to it. A team of neuroscientists, led by MIT’s Nancy Kanwisher, is investigating the so-called fusiform face area, which becomes active when a person sees a face. Their question is, what if they reverse the pumps? Intentionally activate that area—what would the man see?

You don’t have to be a cyborg to know that you should never trust your lying mind. It conceals from you, for example, the fact that all your perceptions are on a delay. Turning photons into sight, air-pressure fluctuations into sound, aerosolized molecules into smells—that takes however long your imperfect sensory organs need to receive the signals, transduce them into the language of the brain, and pass them on to the shrublike networks of nerve cells that compute the incoming data. The process isn’t instantaneous, but you’re never aware of the zillions of synaptic zaps going on, the electrochemical fizz that makes up your mind. The truth is it’s stagecraft—and you’re both director and audience.

You perceive, or think you perceive, things that aren’t “really there” all the time—that aren’t anywhere except inside your head. That’s what dreams are. That’s what psychedelic drugs do. That’s what happens when you imagine the face of your aunt, the smell of your first car, the taste of a strawberry.

From this perspective, it’s not actually hard to incept a sensory experience—a percept—into someone’s head. I did it to you for the first few paragraphs of this story. I described how the cyborg was dressed, gave you a hint of what the room looked like, told you the soccer ball was orange. You saw it in your mind, or at least some version of it. You heard, in your mind’s ear, the research subject talking to the scientists (although in real life they were speaking Japanese). That’s all fine and literary. But it’d be nice to have a more direct route. The brain is salty glop that turns sensory information into mind; you ought to be able to harness that ability, to build an entire world in there, a simulation indistinguishable from reality.

Kanwisher’s experiment didn’t do that—not by a long shot. But it certainly suggested the possibility, the power, of jacking directly into the brain. When you watch video of the tests, what’s most remarkable is the man’s gentle reaction. He doesn’t appear to feel anything when the scientists hit the juice. The box with eyes doesn’t seem to scare or startle him; in fact, he seems more surprised when it vanishes. The experience may not be real, exactly. (At one point, Kanwisher told me, the volunteer asked, “Am I just imagining things?”) But there’s something real-ish about it. The cycling of electrical impulses into his fusiform gyri hasn’t just shown him a face; it has injected the ineffable feeling of face-ness.

The idea of uploading a synthetic experience into a mind has been a load-bearing member in science fiction for at least 75 years—The Matrix, sure, but also most of Philip K. Dick’s work, cyberspace, the Metaverse, the tape recorder in the 1983 movie Brainstorm, the superconducting quantum interference device in the (underrated) 1995 movie Strange Days. But in real life (that’s what this is, right?), we’re a long way from a data port in the nape of every neck. Neuroscientists can decode the signal coming out of the brain well enough to move a cursor or a robotic arm, though they can’t achieve the fluid elegance of a biological connection. Signal going in is even trickier.

Neurosurgeons are pretty good at implanting electrodes. The problem is knowing where, in all that occult neural shrubbery, to put them. A tiny clump of cells might handle some portion of a given task, but the clumps talk to each other, and it’s the formation and reformation of these networks that help power cognition. If you’re trying to trick a mind into perceiving a constructed input as reality, you have to understand what individual neurons do, what big gobbets of lots of neurons do, and how they all relate to each other.

That can get dismayingly specific. Sixteen years ago, Christof Koch, chief scientist at the Allen Institute for Brain Science, helped run a now famous study showing that neurons in a part of the brain called the medial temporal lobe respond to what a wordsmith would identify as nouns—persons, places, or things. One lit up when a person saw pictures of the actress Halle Berry, for example. Another robustly activated for different images of the actress Jennifer Aniston (but not for pictures of her with Brad Pitt). “Neurons are the atoms of perception,” Koch says. “For a Matrix-like technology, you would have to understand the trigger feature of each individual neuron, and there are 50,000 to 100,000 neurons in a piece of brain the size of a grain of rice.” Without that catalog, you might be able to make someone “see flashes of light or motion,” he says, but they’ll “never see Father Christmas.”

Well, flashes of light are a start. You can do a lot with flashes of light. In a lab at the Netherlands Institute for Neuroscience, Pieter Roelfsema and his team have been using them to teach monkeys to read. Not, like, philosophy, but just enough to be able to tell the difference between letters of the alphabet. The researchers do it by stimulating an area called V1, which is part of the visual cortex, a patch of neurons at the back of every primate’s head. When you send current through a V1 electrode, the mammal will see a dot of light floating in space. Switch on the electrode next door, and a second dot will appear next to the first one. These are phosphenes, the phantom lights you see after you hit your head, or the little birdies that fly around Wile E. Coyote after he gets walloped. (The percepts that the Japanese patient had are officially called “facephenes.”)

Put an array of electrodes into V1, Roelfsema says, and “you can work with it like a matrix board. If you have 1,000 electrodes, you basically have 1,000 light bulbs that you can light up in digital space.” The team could stimulate the electrodes in the shape of an A or a B, and the monkeys could indicate they saw the difference.

You can imagine how, eventually, a visually impaired person might be able to sort-of see with this technology: Connect an electrode array in V1 to a camera on the outside world, and process the footage into a pointillist picture of reality. It might look like bitmapped Minecraft going in, but brains are very good at adapting to new kinds of sensory data.
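The camera-to-V1 idea above can be sketched in a few lines. This is a purely hypothetical illustration, not any real implant's API: the grid size, brightness threshold, and function name are invented. It just shows the core step—downsampling a camera frame into an on/off pattern for a small electrode array, one "light bulb" per electrode.

```python
# Hypothetical sketch of the camera-to-phosphene idea described above.
# No real implant works this way; grid size and threshold are made up.

def frame_to_phosphenes(frame, grid=8, threshold=128):
    """Downsample a grayscale frame (a 2D list of 0-255 brightness
    values) into a grid x grid map of electrodes to switch on (True)
    or off (False)."""
    rows, cols = len(frame), len(frame[0])
    cell_h, cell_w = rows // grid, cols // grid
    pattern = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            # Average the brightness over this cell of the frame...
            cell = [frame[y][x]
                    for y in range(gy * cell_h, (gy + 1) * cell_h)
                    for x in range(gx * cell_w, (gx + 1) * cell_w)]
            mean = sum(cell) / len(cell)
            # ...and fire the matching electrode if it's bright enough.
            row.append(mean >= threshold)
        pattern.append(row)
    return pattern


# A frame with one bright vertical bar yields a single lit column of
# phosphenes -- the "bitmapped Minecraft" version of a line.
frame = [[255 if 4 <= x <= 5 else 0 for x in range(16)] for y in range(16)]
pattern = frame_to_phosphenes(frame)
```

The point of the toy is the scale problem: at 1,000 electrodes you get roughly a 32×32 dot picture, which is why the text calls the result pointillist rather than photographic.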

Still, to get enough points to make lines and shapes and other useful stimuli, you need lots and lots of electrodes, and the electrodes need to be very precisely targeted. That’s true for any electrode-based approach to sending apprehensible signals into the brain, not just glittery phosphene shapes. Whatever thoughts are, they’re neurally specific. Excite a little too much tissue, Koch says, and “you get chaos.” What’s more, you’ve got to get your timing right. Perception and cognition are like a piano sonata; the notes must sound in a particular order for the harmonies to work. Get that timing wrong and adjacent electrical pings don’t look like shapes—they look like one big smear, or like nothing at all.

Part of what makes the brain’s wheres and whens so hard to parse is that recording neural activity yields data that’s not a huge help if you’re trying to induce neural activity. “There’s a fundamental asymmetry between brain reading and brain writing,” says Jack Gallant, a neuroscientist at UC Berkeley. The signals you see when a brain is doing brain-things aren’t actually thought; they’re the exhaust the brain emits while it’s thinking. Researchers get a little slice of data about the overall state of the brain as a percept crosses the finish line, but sending that data back in wouldn’t re-create the entire race—the successive laps of sensing, perception, recognition, cognition. True, Kanwisher’s team lit up a large face-recognizing area of the brain and got someone to see a face, kind of. That’s the sensibility but not the sense, not a percept of a specific face. Seeing Jennifer Aniston stimulates the Jennifer Aniston neuron; nobody knows whether stimulating the Jennifer Aniston neuron could make someone see Jennifer Aniston.

None of the electrode arrays currently approved for use in humans get anywhere near bridging that gap. They’re bulky and max out at just about 1,000 electrodes, which by the brain’s definition makes them lo-fi. At the moment, researchers are a long way from playing a convincing sonata. “We’re equivalent to banging on the keyboard,” says Daniel Yoshor, a neurosurgeon at the University of Pennsylvania. But the tech will improve, of course. Yoshor and his colleagues have a grant from the Pentagon’s mad-science agency, Darpa, to develop first a 64,000-electrode array, then one with a million electrodes. Neuralink, one of Elon Musk’s companies, is working on thinner, more flexible implants, along with a robot surgeon that can knit them into the brain. The distant future might offer wirelessly networked microchips the size of a grain of sand, or sheets embedded with 100 million electrodes, each one connected to its own processor like the pixels in a television. Maybe not Brahms, but something you can dance to.

Shove a billion electrodes in there; you’ll still have problems. Maybe you could make them supple enough to not cause tissue damage if someone shakes their head too hard. Maybe you could figure out surface coatings that slough off the brain’s gunky protective cells, called glia. But remember how brains are really just gobs of gelatinous think-meat suspended in salty water? Well, salty water is highly conductive. Send charge through an electrode in the hopes of stimulating a neuron, and it “extends out to an area beyond the electrode, to a kind of volumetric space with dimensions that are ill-defined,” says John Rogers, a materials scientist at Northwestern University. “You’re probably lighting up more than one neuron.” Each electrode is like a lighthouse on a foggy night: It’s illuminating the rocky shoals, sure, but the light also attenuates and diffracts through the fog. You can’t really keep your zaps contained.

Another technology is on tap, though. It relies on shape-shifting pigment proteins called opsins. We vertebrates have these molecules in the cells of our retinas; when light hits them, they scramble into a new shape, which triggers a cascade of Rube Goldberg reactions inside the cell, which culminates in an electrical impulse that gets sent to the brain. You know, vision. But you don’t need eyes to use opsins. In some algae and microbes, they’re embedded in the cells’ outer surfaces, where they serve as light-activated gateways that move ions in and out. (This is one of the ways a brainless single-celled organism can swim toward the sun.)

That’s incredibly useful, because it’s also how neurons work—conducting ions and the electrical charge they carry. In the mid-2000s, researchers figured out how to genetically transplant those outer-surface opsins into brain cells. This bit of engineering gave neuroscientists the ability to control specific kinds of neurons with different-colored lasers—to turn them on and off with a careful pew-pew! If you were trying to name a cool brain-control technology, you couldn’t really do better than “holographic optogenetics.”

The technique is great for studying what different neurons do. Researchers can genetically implant their ion gates into entire networks of neurons, including many of the brain’s myriad cell types, in a somewhat less damaging, somewhat less physically invasive way than jamming a plug in there. (The flip side: it’s hard to get the light to penetrate deeply unless you jam a fiber in there.) In some cases, using a different technique, the cells can also be made to fluoresce under a light source, allowing a researcher with a microscope to watch the brain at work.

But optogenetics also works for input. You use bursts of light (from lasers, digital projectors, optic fibers threaded into the brain) to trigger your engineered ion gates. A team of researchers from New York University and Northwestern has bred mice with optogenetic tweaks to their olfactory bulbs—the neurobiological node between a mouse’s exquisitely sensitive nose and its cortex. When the scientists shine the right kinds of light on the olfactory bulb at the right times, the mouse smells (or acts like it’s smelling) what they call a “synthetic odor.”

What does the smell smell like? “We have no idea,” says Dmitry Rinberg, a neurobiologist at NYU. “Maybe it stinks. Maybe it’s pleasant. Probably it has never experienced this odor in the universe.” There’s no way to know, he says. You can’t ask the mouse.

Unfortunately, that’s the only way to be sure that any percept input system is working. You have to ask the wearer (owner? recipient? Are you still a cyborg if the implant is genetic but also has a laser attached?) what they perceive. Also, they’d still have cables plugged into their heads, even if they were optic fiber instead of electrical wire. And they’d have to volunteer to have their brain genetically engineered.

In people, all this work is much more advanced out at the periphery than in the brain. Cochlear implants, which jack into your auditory nerve rather than your actual brain, provide a pretty good experience for people with impaired hearing, though it isn’t as high-fidelity as a fully functional set of ears. A few scientists are working on the equivalent for the retina. Some prosthetic limbs connect to nerves that can convey a sense of touch. Adding a little bit of vibration to a prosthetic arm can even send the illusion of kinesthesia, a sense of the arm moving in space, so that the user doesn’t have to watch it to know where it is.

But none of that is a full sensorium. It’s not a world. Dancing phosphenes, a cochlear implant input, and a neurophotonically lit olfactory cortex—even if you could fit all that gear into your skull—wouldn’t trick you into thinking you were somewhere else. And it wouldn’t change the fact that each of our brains constructs reality in whatever way it pleases. You could build a full-featured sim that covers every sense, even the tricky ones, but its ultimate look and feel would always depend on the mind.

In “What Is It Like to Be a Bat?,” an often-cited essay from 1974, the philosopher Thomas Nagel argued that every conscious creature’s experiences are individuated, unique to the animal and its brain. The lonely implication is that I can never understand exactly what you are experiencing, any more than I can understand what it feels like to have wings and use echolocation. Even if we were actual cyborgs with plugs in the back of our heads and electrodes and optical fiber in our cortices, ready to receive digital red pills full of glowing green kanji, my brain would interpret all that input differently than your brain would. Sure, we’d tell our machine overlords we were experiencing the same things as each other, because that’s how it’d feel. But the face you see when I tickle your fusiform gyri will never have the same eyes as the one I see when you goose mine. We might eventually live in the same Matrix, but we’d still be in different worlds.

All Rights Reserved for Anna Raben
