Musk’s newest startup is venturing into a series of hard problems

Elon Musk in Idaho in 2015.

Tonight, Elon Musk has scheduled an event where he intends to unveil his plans for Neuralink, a startup he announced back in 2017 and then went silent on. If you go to the Neuralink website now, all you’ll find is a vague description of its goal of developing “ultra-high-bandwidth brain-machine interfaces to connect humans and computers.” These interfaces have been under development for a while, typically under the moniker of brain-computer interfaces, or BCIs. And, while there have been some notable successes in the academic-research world, there’s a conspicuous lack of products on the market.

The slow progress comes, in part, because a successful BCI has to tackle multiple hard problems and, in part, because the regulatory and market conditions are challenging. Ahead of tonight’s announcement, we’ll take a look at all of these and then see how Musk and the people who advise him have decided to tackle them.

A series of problems

An effective BCI means figuring out how to get the nervous system to communicate with digital hardware. Doing so requires solving three problems, which I’ll call reading, coding, and feedback. We’ll go through each of these below.

The first step in a BCI is to figure out what the brain is up to, which requires reading neural activity. While there have been some successes doing this non-invasively using functional MRI, this is generally too blunt an instrument. It doesn’t have the resolution to pick out what small populations of cells are doing and so can only give a very approximate reading of the brain. As a result, we’re forced to go with the alternative: invasive methods, specifically implanting electrodes.
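To make “reading” a little more concrete, here’s a minimal sketch, in Python with entirely invented numbers, of the kind of first-pass processing an electrode signal typically gets: flagging a spike wherever the voltage trace crosses a threshold. It illustrates the general idea, not any actual lab’s or company’s pipeline.

```python
import numpy as np

# Illustrative only: detect spikes in a synthetic extracellular voltage
# trace by simple threshold crossing. The sampling rate, noise level, and
# spike shapes are all made-up numbers, not anyone's real recording.
rng = np.random.default_rng(0)
fs = 30_000                        # samples per second
t = np.arange(fs) / fs             # one second of "recording"
voltage = rng.normal(0, 5e-6, fs)  # ~5 uV of background noise

# Inject a few fake spikes: brief ~1 ms negative deflections.
for spike_time in (0.10, 0.35, 0.72):
    i = int(spike_time * fs)
    voltage[i:i + 30] -= 60e-6

# A spike "starts" wherever the trace first drops below -5 standard
# deviations of the overall signal.
threshold = -5 * np.std(voltage)
below = voltage < threshold
onsets = np.flatnonzero(below[1:] & ~below[:-1]) + 1

print("detected spike times (s):", t[onsets])
```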

In the past, electrodes were rather large compared to the cells they were tracking and, therefore, ended up taking input from a large population of cells. The larger hardware meant that you couldn’t put many electrodes on a single implant, and larger implants would often come in contact with more than one region of the brain. Making matters worse, electrodes would often cause the development of scar tissue that interfered with our ability to read neural activity.

Our advanced manufacturing abilities have gradually taken care of that. We can now make electrodes out of biocompatible materials that limit scarring. The electrodes are finer and so contact far fewer cells. And, finally, the small size means we can target the region of the brain we’re interested in with more precision.

The challenge, however, can be figuring out what part of the brain we want to target. The features that are visible anatomically are often large and perform multiple functions. While we have a good grasp on the role of the areas that process things like motor control and visual input, there’s much less information about what other areas, like those involved with memory, are actually doing.

Plus, brain activity is driven by the interaction of various regions. You might think that something like Parkinson’s disease, characterized by tremors, would originate in the area that controls muscle activity. You’d be wrong, as the diagram in this paper shows: the cells that die in Parkinson’s patients are located in an area that takes part in a complicated communications loop with at least six other areas of the brain.

In some cases, other options exist. Amputees could work with an interface that reads the nerves just above the site of amputation. The spinal cord is another potential site where information can be read. These obviously have major advantages when it comes to avoiding the risk and complexity of putting electrodes in the brain. But they’re also relevant to only a subset of the people who could be helped by BCI technology.

Decoding the brain

Once we can listen in on nerves, we have to figure out what they’re saying. Digital systems expect their data to be in an ordered series of voltage changes. Nerves don’t quite work that way. Instead, they send a series of pulses; information is encoded in the frequency, intensity, and duration of these pulse trains, in an extremely analog fashion. While this might seem manageable, there’s no single code for the entire brain. A series of pulses coming from the visual centers will mean something completely different from the pulses sent by the hippocampus while it’s recalling a memory.
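As a toy illustration of that translation problem, the snippet below does the most basic conversion a digital system might start with: binning a pulse train into firing rates. The spike times and bin width here are made up, and real decoding is far more involved; this just shows the shape the data takes.

```python
import numpy as np

# Toy translation step: a digital system can't consume raw pulse trains,
# so a common first move is to bin spike times into firing rates. The
# spike times and bin width are invented purely for illustration.
spike_times = np.array([0.012, 0.030, 0.041, 0.290, 0.310, 0.335, 0.360])
duration = 0.5    # seconds of recording
bin_width = 0.1   # 100 ms bins

edges = np.arange(0.0, duration + bin_width, bin_width)
counts, _ = np.histogram(spike_times, bins=edges)
rates = counts / bin_width  # spikes per second in each bin

# A burst of pulses shows up as a high rate in its bin: the kind of
# feature a downstream decoder would actually work with.
print(rates)  # [30.  0. 10. 30.  0.]
```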

So, we have to figure out translations for any brain region we wish to interact with. And, to some extent, even that will be variable, as individual cells within a given region will perform specialized functions, and we can’t tell in advance which cells we’ll end up listening to.

There are potential ways to make this a bit easier. Neural networks are excellent at picking out patterns in noisy data and so may be able to help us avoid having to understand a specific code. Things would also be easier if the process we’re trying to listen in on were something that a conscious patient can control, like limb movements. And again, things should be somewhat easier if we can intervene outside the brain. We have a good idea of which nerves control which limb muscles, for example, and so could potentially read from there.
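Here’s a deliberately simplified sketch of that pattern-matching idea. It uses a plain linear classifier (standing in for the fancier neural networks mentioned above) on simulated firing rates; every number and dimension is invented, and it assumes scikit-learn is available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy decoder sketch: simulate binned firing rates from 16 channels while
# a subject "intends" left (0) or right (1), then fit a classifier to
# recover the intent. All shapes and rates are invented for illustration.
rng = np.random.default_rng(1)
n_trials, n_channels = 200, 16
intent = rng.integers(0, 2, n_trials)

# Half the channels fire harder for "right" intent; the rest are noise.
base = rng.normal(20, 5, (n_trials, n_channels))  # baseline rates (Hz)
base[:, :8] += 10 * intent[:, None]               # intent-tuned channels
rates = np.clip(base, 0, None)

# Train on the first 150 trials, test on the remaining 50.
clf = LogisticRegression(max_iter=1000).fit(rates[:150], intent[:150])
accuracy = clf.score(rates[150:], intent[150:])
print(f"held-out decoding accuracy: {accuracy:.2f}")
```

The point isn’t the toy problem’s easy accuracy; it’s that the decoder learns the mapping from examples rather than requiring us to work out the neural code by hand.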

Feedback

One possible aid in all of this is that we don’t necessarily need to get things exactly right. The brain is a remarkably flexible organ, one that can re-learn how to control muscles after having suffered damage from things like a stroke. It’s possible that we only need to get the coding reasonably close, and then the brain will adapt to give the BCI the inputs it needs to accomplish a task.

That, however, requires a degree of feedback; the brain has to be aware of what it’s doing right and what it’s doing poorly if it’s going to improve certain activities. Again, the example of limb movement is an easy one; the subject can just watch what their thoughts are doing as they control a prosthetic or robotic arm. But that’s less true for a lot of the other things we might want to intervene in. Could a person who has always been blind understand how well they’re perceiving visual input?

If we’re controlling motion, there are also other levels of feedback. Let’s say you want to pick up a cup of coffee. Typically, you just have to look at it briefly and can then execute the movements without watching. That’s because our body has a system that keeps track of where all its parts are likely to be (a sense called proprioception). We also might want to know if the cup feels hot when we grasp it and make sure we’re only exerting enough force to hold it, rather than crush it.

A truly effective BCI isn’t going to be a one-way system; it will instead involve a series of two-way communications between the brain and the hardware it’s working with. And all of these conversations will face the reading and coding issues described above.
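To make that two-way structure concrete, here’s a hypothetical closed loop in Python. None of the function names correspond to any real device API; they simply stitch reading, decoding, and feedback into a single cycle.

```python
import numpy as np

# Hypothetical closed loop: every name and number here is invented.
rng = np.random.default_rng(2)

def read_rates():
    """Stand-in for reading binned firing rates off an implant."""
    return rng.normal(20, 5, 16)

def decode_grip_force(rates):
    """Brain -> machine: map firing rates to a target grip force (0..1)."""
    return float(np.clip(rates[:8].mean() / 40, 0, 1))

def encode_feedback(measured_force):
    """Machine -> brain: turn a sensor reading into a stimulation level."""
    return measured_force * 100  # pretend amplitude, in microamps

for step in range(3):
    target = decode_grip_force(read_rates())  # read + decode
    measured = target * 0.9                   # toy actuator and sensor
    stim = encode_feedback(measured)          # the return channel
    print(f"step {step}: grip={target:.2f}, feedback stim={stim:.1f} uA")
```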

Near market-ready?

While all of this may make progress sound like an impossibility, it’s worth remembering that BCIs have accomplished some amazing things, as evidenced by the video below. And implants that provide direct stimulation of the nerves involved in hearing are now commonplace. There have also been successful demonstrations of retinal implants that stimulate the optic nerve to restore limited sight, lots of promising work with movable prosthetics, and some use of deep brain stimulation for diseases like Parkinson’s and clinical depression. To an extent, the BCI already exists.

[Embedded video]
There’s been some astonishing progress with BCI work.

But it’s also worth looking at the date on that video: 2012. Obviously, there’s a big leap from a promising technology demo to something we could put into broader use. And anything more sophisticated than these simple input or output technologies remains purely in the realm of science fiction. We simply don’t know enough about how the brain works, or how its different areas are connected, for much beyond the current generation of technology to be feasible. The NIH’s BRAIN Initiative will ultimately help with some of this, but it’s very much a work in progress at the moment.

In other words, if Musk starts talking about what Neuralink will be doing 15 years out, it should be treated with the same skepticism most people applied to his original timeline for landing on Mars.

But Neuralink doesn’t just face scientific hurdles; it’s also expected to be a profitable company. And here, it faces an additional set of challenges.

Some of those are regulatory. Something like this would clearly require FDA approval as a medical device. And the legal risks of anything that involves brain surgery would also pose a significant danger to a company. There’s also the issue of market size. The number of people in the United States who suffer from at least partial paralysis due to spinal injury, stroke, or multiple sclerosis is substantial. But their issues would have to be severe before a brain implant becomes a reasonable solution, and not everyone will be a good candidate for one. It’s worth watching to see which products are likely to be the first out of the gate from Neuralink, since those will probably help determine whether it survives long enough to enter science-fiction territory.

But, outside of the science, this is familiar territory for Musk. The technology for the sorts of things shown in the video is clearly getting closer to being ready for wider use, much as it was for reusable rockets and electric vehicles. There are also entrenched players in the medical-device market who would be more than happy to steal his lunch money. There’s no way of determining at this point whether Neuralink will end up as something closer to SpaceX or to Musk’s attempts to enter the solar market.
