Machine Interfaces

The first two interests I listed (virtual reality and exoskeletons) share the same limitation: the interface we use to communicate with them. Currently the most popular way we interface with a machine is through our fingers. Another way of thinking about this is that we have mapped a set of muscle movements to characters on a keyboard, which tells the computer our intention.

It's important to think within the framework of the computer and person being integrated into one machine. This might sound like a scary idea, but it's already the case with the phones we carry in our pockets. We are in many ways already cybernetic organisms composed of both biology and technology. The output of our brain interfaces with the input of the machine, and the output of the machine interfaces with our input sensory modalities. A virtual reality headset augments the way the machine's output interfaces with our biological input, but to make the machine as useful as possible we also need to augment the way our biological output interfaces with the machine's input. Similarly, an exoskeleton's limitation lies in its ability to decode the user's intent and augment the movement they are trying to achieve.

In my view there are three methods of creating brain-machine interfaces. The first is to directly sense neurons firing in the brain and map that firing into something the machine can recognize as input. The second is to leverage the existing interface the brain has with the body: the spinal cord. This is sort of like the brain's USB port, and we could add another node to it for direct input into a machine. Lastly, we can sense muscle activity through EMG and map the firing of muscles directly as input into the machine.

To be honest, I don't know much about how you would do the first two methods, and I'm interested in learning more about them. The third method, though, I have an intuition of how you could get to something useful. There are already some start-ups that have achieved some pretty neat demos using EMG. If you can create a robust sensor that is somewhat invariant to position, rotation, sweat, and the other small interferences that occur between sensor placement and removal, then teaching a neural network to read sensor data and map it to a person's intent seems relatively straightforward: construct a big dataset of millions of paired examples of EMG readings and hand, leg, or arm poses, and let the network learn the mapping from this data. Furthermore, if EMG is combined with IMU data for proprioception, we could build fuller models that recognize muscle contractions in the context of limb motion.
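To make that concrete, here is a minimal sketch of what the mapping could look like. Everything in it is an assumption for illustration, not any company's actual approach: an 8-channel EMG armband plus a 6-axis IMU, short fixed-length windows of readings, a handful of hand-pose classes, and a small convolutional network trained to decode intent from each window.

```python
# Hypothetical sketch: classify hand poses from windowed EMG + IMU data.
# Assumed setup (not a real product's): 8 EMG channels, 6 IMU channels
# (accelerometer + gyroscope), 200-sample windows, and 5 pose classes.
import torch
import torch.nn as nn

N_EMG, N_IMU, WINDOW, N_POSES = 8, 6, 200, 5

class IntentDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # 1D convolutions over time pick up muscle-activation patterns
        # that are somewhat tolerant of small shifts in sensor placement.
        self.encoder = nn.Sequential(
            nn.Conv1d(N_EMG + N_IMU, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, N_POSES)

    def forward(self, x):
        # x: (batch, channels, time) window of raw sensor readings
        features = self.encoder(x).squeeze(-1)
        return self.classifier(features)  # logits over pose classes

# One training step; in practice (windows, labels) would come from the
# big recorded dataset of EMG/IMU readings paired with ground-truth poses.
model = IntentDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

windows = torch.randn(32, N_EMG + N_IMU, WINDOW)  # stand-in batch
labels = torch.randint(0, N_POSES, (32,))         # stand-in labels

loss = loss_fn(model(windows), labels)
loss.backward()
optimizer.step()
```

The hard part probably isn't the model itself; it's the sensor robustness and the labeled dataset, which is why the invariance to placement and interference matters so much.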

I think it's important to consider the implications of being fully interfaced with a machine. We already have a hard time disconnecting from our phones, and bringing us closer to technology could deepen the bad relationships we already have with it. Ultimately I think technology is a tool to augment our innate abilities, and it mostly falls on the person to use the tool for good. However, it is even more important not to let "bad actors" hijack our lives to serve their interests. This already occurs to some extent with media and advertising, and a higher degree of interfacing between machine and human makes it even more important to give control to the user wielding these powerful tools. An interface of this sort gives a person an augmented degree of control over a machine; we must ensure that the machine does not try to control them back.
