
Describe the major components of a generic brain-machine interface for restoring motor function in a paralyzed person.

Feb 12, 2016


The three main components for a generic brain-machine interface (BMI) for restoring motor function to a paralyzed person are:

  • brain
  • machine
  • interface

Of course, the details are a bit more involved.

Part I: Brain

In most BMI setups, the brain is viewed as a generator of signals: a "black box" that produces outputs for the rest of our apparatuses to interpret.

More sophisticated approaches, as in the automatic anesthesia control methods of Emery Brown, would use modeling of the signal-generation process to reduce noise and improve prediction, but this approach is as yet uncommon in motor BMI work.

Part II: Machine

The "effector" of a truly generic BMI is arbitrary: it can even be another brain, if you're willing to stretch definitions suitably.

Following on years of monkey research, the prototypical effector is a robotic arm, a distant relative of the automotive factory robot. The current state-of-the-art is quite good.

Alternatives include chest-attached manipulators, wheelchairs, powered exoskeletons, and speakers for producing vocal behavior.

Part III: Interface

The most important component of a BMI is the interface, which connects nature's organic machine to man's artificial machines. It can be helpful to break this interface down into two components: a detector, which takes biological signals, like electrical potentials, and converts them into a human-usable digital signal, and a decoder, which interprets that digital signal as a command for the effector.
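This two-stage breakdown can be sketched in code. Everything below is illustrative, not any real BMI library: the thresholding "detector" and linear "decoder" are stand-ins for whatever signal processing a real system would use.

```python
import numpy as np

def detector(raw_potentials):
    """Convert analog electrode readings into a digital feature vector
    (here: a toy thresholding step standing in for spike detection)."""
    return (np.asarray(raw_potentials) > 0.5).astype(float)

def decoder(features, weights):
    """Map the digital features to an effector command
    (here: a simple linear readout)."""
    return features @ weights

# Wire the two stages together on fake electrode potentials:
raw = [0.2, 0.9, 0.7, 0.1]
weights = np.array([1.0, -1.0, 0.5, 0.0])
command = decoder(detector(raw), weights)
```

The point is only the separation of concerns: the detector knows about biology and electronics, the decoder knows about statistics, and the effector just receives a command.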

IIIa: Detector

Any neural activity measurement method can, in principle, be used to detect the signal. Possibilities include:

  • scalp electroencephalography (EEG)
  • electrocorticography (ECoG)
  • intracortical microelectrode arrays
  • functional magnetic resonance imaging (fMRI)
  • optical imaging of activity-dependent fluorescence

Each detection strategy brings with it pros and cons – exchanging breadth of imaging for resolution, or increasing precision at the cost of increased invasiveness.

IIIb: Decoder

Many methods exist for decoding this signal, producing a command signal for the effector. This is almost always approached as a supervised learning task.

In supervised learning, we have a set of inputs \(x\), and a set of desired outputs \(y\). In the case of a neural prosthetic, \(x\) is the output of the detector while \(y\) is the command signal that the subject was attempting to send. We aim to learn a function \(f(x)\) that produces \(y\).

The basic approach is to define a "parameterization" of the function – think of the \(m\) and \(b\) in \(y=mx+b\). We call our set of parameters \(\theta\), and so our function is \(f_{\theta}(x) = \hat{y}\). We twiddle the values of \(\theta\), always aiming to make \(\hat{y}\) closer to \(y\).
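For the linear case above, the parameter-twiddling can be done in closed form with ordinary least squares. The sketch below uses synthetic data in place of real detector output and intended commands; the "true" values of \(m\) and \(b\) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: x plays the role of the detector's output,
# y the command the subject was attempting to send.
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + 0.5 + rng.normal(0, 0.05, size=200)  # true m=2.0, b=0.5, plus noise

# Augment x with a constant column so the intercept b is learned alongside m.
X = np.column_stack([x, np.ones_like(x)])
m, b = np.linalg.lstsq(X, y, rcond=None)[0]

# The learned decoder: f_theta(x) = m*x + b produces y_hat.
y_hat = m * x + b
```

With enough training data, the recovered \(m\) and \(b\) land close to the values that generated \(y\); real decoders differ mainly in using richer parameterizations of \(f_{\theta}\) and iterative fitting instead of a closed-form solve.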

There are many, many approaches to supervised learning, as the vagueness and generality of the above paragraph should imply. Modern methods can be wildly successful, even on very difficult tasks.