Fusing human and digital: when can we say Cyborg?


The 2014 World Cup opened with a paraplegic man kicking the first ball. The moment was almost, but not quite, captured on video. To make the kick, he wore a cap full of sensors that read his brainwaves; those signals drove an exoskeleton that moved his leg.

The last article in this series was on wearable computing. Some of that technology comes close to integrating human and machine. This article looks at the possibility of actually leaping that gap. Can we make our technology a physical part of ourselves?

The concept of integrating biology with technology has probably been around as long as the idea of the robot we discussed previously. Edgar Allan Poe wrote a story in 1843 describing a man, once fully human, who had to be physically reassembled piece by piece. Though the story is satire, the concept is recognizable as something we would now call a cyborg.

The word itself was coined in 1960 by Manfred E. Clynes and Nathan S. Kline. According to Wikipedia, their original formulation was:

For the exogenously extended organizational complex functioning as an integrated homeostatic system unconsciously, we propose the term ‘Cyborg’. – Manfred E. Clynes and Nathan S. Kline

It’s arguable whether the man who kicked that first World Cup ball, Juliano Pinto, is actually a cyborg. The cap he wore to operate his exoskeleton was not fused into his body. I would argue that the term applies, however. What is being merged is information: brainwaves directly controlling external technology to create bodily movement. But there are other, clearer examples of this evolution.

Using brainwaves to control movement has been on the horizon for a while now. In 2003, experiments showed that monkeys could control robot arms with their brainwaves. In 2008, a monkey named Idoya, with electrodes implanted in her brain, made a humanoid robot walk on a treadmill using her mind.

In May 2012, a paralyzed woman used a robotic arm to raise a cup to her lips and drink. She controlled the arm through a brain implant.

Then, just this month, a man named Ian Burkhart opened and closed his own fist for the first time in four years. He had been paralyzed in a swimming accident and volunteered to have a chip placed in his brain. At first he practiced moving an on-screen animation with his thoughts. Once he was comfortable with that task, he was fitted with electrodes in his arm. The signal from his brain went to a computer, which then stimulated the arm electrodes according to his mental command.
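The signal chain just described (record, decode, stimulate) can be sketched in miniature. This is purely illustrative: the actual trial used far more sophisticated decoding over many cortical channels. Here we assume a toy decoder that thresholds a single signal value and emits on/off stimulation commands; all names are hypothetical.

```python
# Toy sketch of a record -> decode -> stimulate pipeline.
# A real system decodes multi-channel cortical recordings with
# trained models; here one number stands in for the neural signal.

def decode(signal_sample, threshold=0.5):
    """Return True (stimulate) when the sample crosses the threshold."""
    return signal_sample > threshold

def stimulation_commands(samples, threshold=0.5):
    """Translate a stream of neural samples into on/off commands."""
    return ["STIM_ON" if decode(s, threshold) else "STIM_OFF" for s in samples]

print(stimulation_commands([0.1, 0.7, 0.9, 0.2]))
# ['STIM_OFF', 'STIM_ON', 'STIM_ON', 'STIM_OFF']
```

In practice the "commands" would be pulse patterns sent to the arm electrodes many times per second, not simple on/off flags.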

Moving an arm is one thing. Borderline miraculous, but also an accomplishment many of us probably expected to happen someday. Sight is another thing altogether. The concept seems so far-fetched that Star Trek’s Geordi La Forge couldn’t find a doctor in the 24th century to restore his vision.

Yet here in the early part of the 21st century we’re already getting close. Several researchers have been working on different ways to take signals from an artificial eye and pass them on as vision directly into the brain. As far back as 2002, one researcher was able to do just that.

In 2013, a man was fitted with a device that converts color to sound. He is colorblind, but can interpret colors through this sound input, delivered to his brain via bone conduction.
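The color-to-sound idea can be illustrated with a toy mapping. This is not the actual device’s scale, which the article does not describe; we simply assume each color is reduced to a hue angle and mapped linearly onto an audible frequency band.

```python
import colorsys

# Toy illustration only: reduce an RGB color to its hue, then map the
# hue (0.0-1.0) linearly onto an audible frequency band. The band
# limits are assumptions, not the real device's calibration.
def color_to_frequency(r, g, b, f_low=120.0, f_high=1200.0):
    """Map an RGB color (0-255 per channel) to a tone frequency in Hz."""
    hue, _, _ = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return f_low + hue * (f_high - f_low)

print(color_to_frequency(255, 0, 0))  # pure red, hue 0 -> 120.0 Hz
print(color_to_frequency(0, 255, 0))  # pure green, hue 1/3 -> ~480 Hz
```

A real system would then synthesize the tone and deliver it through a bone-conduction transducer rather than print a number.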

Even more far-fetched is the idea that we could create an artificial skin—one that would actually provide the sensation of touch. But this, too, is happening. A so-called electric skin is being developed that will pass on sensations from a robot arm or a prosthetic device.

Another approach some researchers have taken with prosthetics is to use signals from the muscles. Hugh Herr is a double amputee and a researcher at MIT. His team has developed prosthetic limbs that act on signals from the muscles in the residual limb. Herr flexes or clenches his leg muscles, and the limb interprets those signals to help him walk. This approach functions without any brain-machine interface.

Another device that doesn’t interface directly with the brain is the Argus II sight system. It pairs a camera mounted on a set of glasses with a processor. The input reaches the eye through a device surgically attached to the retina. This system has been used in Europe since 2011 and was introduced to the United States in 2014.

The future of this technology is approaching quickly. We are learning more and more about the neural signals and pathways our brains use to communicate with our bodies. The more we learn, the more we can imitate, and the more we can integrate ourselves with our machinery. At first, this will be done to restore limbs and senses that people have lost. But down the road, it is likely that people will start to enhance themselves for performance or vanity. Then we can truly talk about cyborgs.

More to read:

Wired article on present-day robots and the coming cyborgs:

http://www.wired.com/2014/05/the-robots-of-the-future-are-already-here-the-cyborgs-are-coming-next/

Reddit AMA (ask me anything) with Dr. Peckham, developer of muscle stimulating implants:

http://www.reddit.com/r/science/comments/26mdul/science_ama_series_i_am_dr_hunter_peckham/

Vision Quest (Wired article on restoring sight):

http://archive.wired.com/wired/archive/10.09/vision.html?pg=1&topic=&topic_set

Listening to Color (Neil Harbisson’s TED talk):

http://www.ted.com/talks/neil_harbisson_i_listen_to_color