
The brain-computer interface (BCI), sometimes called a neural-control interface (NCI), mind-machine interface (MMI), direct neural interface (DNI), or brain-machine interface (BMI), is a direct communication pathway between an enhanced or wired brain and an external device. BCIs differ from neuromodulation in that they allow a two-way flow of information. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.

Research on BCIs began in the 1970s at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA. The papers published after this research also mark the first appearance of the expression brain-computer interface in the scientific literature.

The field of BCI research and development has since focused primarily on neuroprosthetic applications that aim to restore damaged hearing, sight and movement. Thanks to the remarkable cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels. Following years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-1990s.




History

The history of brain-computer interfaces (BCIs) begins with Hans Berger's discovery of the electrical activity of the human brain and the development of electroencephalography (EEG). In 1924 Berger was the first to record human brain activity by means of EEG. By analyzing EEG traces, Berger was able to identify oscillatory activity, such as Berger waves or alpha waves (8-13 Hz).

Berger's first recording device was very rudimentary. He inserted silver wires under the scalps of his patients. These were later replaced by silver foils attached to the patient's head by rubber bandages. Berger connected these sensors to a Lippmann capillary electrometer, with disappointing results. However, more sophisticated measuring devices, such as the Siemens double-coil recording galvanometer, which displayed electric voltages as small as one ten-thousandth of a volt, led to success.

Berger analyzed the interrelation of alternations in his EEG wave diagrams with brain diseases. EEGs permitted completely new possibilities for the research of human brain activity.

UCLA professor Jacques Vidal coined the term "BCI" and produced the first peer-reviewed publications on this topic. Vidal is widely recognized as the inventor of BCIs in the BCI community, as reflected in numerous peer-reviewed articles reviewing and discussing the field. His 1973 paper stated the "BCI challenge": controlling external objects using EEG signals. In particular he pointed to the Contingent Negative Variation (CNV) potential as a challenge for BCI control. The 1977 experiment Vidal described was the first application of BCI after his 1973 BCI challenge. It was a non-invasive EEG (actually Visual Evoked Potentials (VEP)) control of a graphical object, a cursor-like point on a computer screen. The demonstration was movement through a maze.

After his early contributions, Vidal was not active in BCI research, nor at BCI events such as conferences, for many years. In 2011, however, he gave a lecture in Graz, Austria, supported by the Future BNCI project, presenting the first BCI, which earned a standing ovation. Vidal was joined by his wife, Laryce Vidal, who previously worked with him at UCLA on his first BCI project.

In 1988 a report was given on non-invasive EEG control of a physical object, a robot. The experiment described was EEG control of multiple start-stop-restart cycles of robot movement, along an arbitrary trajectory defined by a line drawn on the floor. Line-following was the default robot behavior, utilizing autonomous intelligence and an autonomous energy source.

In 1990 a report was given on a closed-loop, bidirectional adaptive BCI that controlled a computer buzzer by an anticipatory brain potential, the Contingent Negative Variation (CNV) potential. The experiment described how the expectation state of the brain, manifested by the CNV, controlled the S2 buzzer in a feedback loop within the S1-S2-CNV paradigm. The resulting cognitive wave, representing expectation learning in the brain, was named the Electroexpectogram (EXG). The CNV brain potential was part of the BCI challenge presented by Vidal in his 1973 paper.

In 2015, the BCI Society was officially launched. This non-profit organization is managed by an international board of BCI experts from different sectors (academia, industry, and medicine) with experience in different types of BCI, such as invasive/non-invasive and control/non-control. The board is elected by members of the Society, which has several hundred members. Among other responsibilities, the BCI Society organizes the International BCI Meetings. These major conferences are held regularly and include activities such as keynote lectures, workshops, posters, satellite events and demonstrations. The next meeting was scheduled for May 2018 at the Asilomar Conference Grounds in Pacific Grove, California.




Versus neuroprosthetics

Neuroprosthetics is an area of neuroscience concerned with neural prostheses, that is, using artificial devices to replace the function of impaired nervous systems and brain-related problems, or of sensory organs. The most widely used neuroprosthetic device is the cochlear implant which, as of December 2010, had been implanted in approximately 220,000 people worldwide. There are also several neuroprosthetic devices that aim to restore vision, including retinal implants.

The difference between BCIs and neuroprosthetics is mostly in how the terms are used: neuroprosthetics typically connect the nervous system to a device, whereas BCIs usually connect the brain (or nervous system) to a computer system. Practical neuroprosthetics can be linked to any part of the nervous system - for example, peripheral nerves - while the term "BCI" usually designates a narrower class of systems that interface with the central nervous system.

The terms are sometimes, however, used interchangeably. Neuroprosthetics and BCIs seek to achieve the same aims, such as restoring sight, hearing, movement, the ability to communicate, and even cognitive function. Both use similar experimental methods and surgical techniques.



Animal BCI Research

Several laboratories have managed to record signals from the cerebral cortices of monkeys and rats in order to operate BCIs that produce movement. Monkeys have navigated computer cursors on a screen and commanded robotic arms to perform simple tasks simply by thinking about the task and watching the visual feedback, but without any motor output. In May 2008, photographs showing a monkey at the University of Pittsburgh Medical Center operating a robotic arm by thinking were published in a number of well-known science journals and magazines. Other studies in cats have decoded their visual neural signals.

Initial work

In 1969 the operant conditioning studies of Fetz and colleagues, at the Regional Primate Research Center and Department of Physiology and Biophysics, University of Washington School of Medicine in Seattle, showed for the first time that monkeys could learn to control the deflection of a biofeedback meter arm with neural activity. Similar work in the 1970s established that monkeys could quickly learn to voluntarily control the firing rates of individual and multiple neurons in the primary motor cortex if they were rewarded for generating appropriate patterns of neural activity.

Studies that developed algorithms to reconstruct movements from motor cortex neurons, which control movement, date back to the 1970s. In the 1980s, Apostolos Georgopoulos at Johns Hopkins University found a mathematical relationship between the electrical responses of single motor-cortex neurons in rhesus macaque monkeys and the direction in which they moved their arms (based on a cosine function). He also found that dispersed groups of neurons, in different areas of the monkey's brain, collectively controlled motor commands, but he was able to record the firings of neurons in only one area at a time, because of the technical limitations imposed by his equipment.
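The cosine relationship Georgopoulos described is often summarized as a tuning-curve equation; the following is a minimal sketch in generic notation (the baseline rate b_0, modulation depth m, and preferred direction θ_p are labels chosen here for illustration, not symbols taken from the original papers):

```latex
% Cosine tuning of a single motor-cortex neuron (illustrative notation)
r(\theta) = b_0 + m \cos(\theta - \theta_p)

% Population-vector estimate of movement direction from N such neurons,
% where \mathbf{c}_i is a unit vector along neuron i's preferred direction:
\hat{\mathbf{d}} \;\propto\; \sum_{i=1}^{N} \left( r_i - b_{0,i} \right) \mathbf{c}_i
```

Under this reading, each neuron "votes" for its preferred direction in proportion to how far its firing rate rises above baseline, which is one common way such population decoding is explained.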

There has been rapid development in BCIs since the mid-1990s. Several groups have been able to capture complex motor cortex signals by recording from neural ensembles (groups of neurons) and using these to control external devices.

Prominent research successes

Kennedy and Yang Dan

Phillip Kennedy (who later founded Neural Signals in 1987) and colleagues built the first intracortical brain-computer interface by implanting neurotrophic-cone electrodes into monkeys.

In 1999, researchers led by Yang Dan at the University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain's sensory input) of sharp-eyed cats. Researchers targeted 177 brain cells in the thalamus's lateral geniculate nucleus, which decodes signals from the retina. The cats were shown eight short movies, and their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the cats saw and were able to reconstruct recognizable scenes and moving objects. Similar results in humans have since been achieved by researchers in Japan (see below).

Nicolelis

Miguel Nicolelis, a professor at Duke University in Durham, North Carolina, has been a prominent proponent of using multiple electrodes spread over a greater area of the brain to obtain neuronal signals to drive a BCI.

After conducting initial studies in rats during the 1990s, Nicolelis and his colleagues developed BCIs that decoded brain activity in owl monkeys and used the devices to reproduce monkey movements in robotic arms. Monkeys have advanced reaching and grasping abilities and good hand manipulation skills, making them ideal test subjects for this kind of work.

By 2000 the group succeeded in building a BCI that reproduced owl monkey movements while the monkey operated a joystick or reached for food. The BCI operated in real time and could also control a separate robot remotely via Internet protocol. But the monkeys could not see the arm moving and did not receive any feedback, a so-called open-loop BCI.

Later experiments by Nicolelis using rhesus monkeys succeeded in closing the feedback loop and reproduced monkey reaching and grasping movements in a robot arm. With their deeply cleft and furrowed brains, rhesus monkeys are considered better models for human neurophysiology than owl monkeys. The monkeys were trained to reach for and grasp objects on a computer screen by manipulating a joystick while the corresponding movements of a robot arm were hidden. The monkeys were later shown the robot directly and learned to control it by viewing its movements. The BCI used velocity predictions to control reaching movements and simultaneously predicted handgripping force. In 2011 O'Doherty and colleagues demonstrated a BCI with sensory feedback in rhesus monkeys. The monkey controlled the position of an avatar arm while receiving sensory feedback through direct intracortical microstimulation (ICMS) in the arm representation area of the sensory cortex.

Donoghue, Schwartz and Andersen

Other laboratories that have developed BCIs and algorithms to decode neuron signals include those run by John Donoghue at Brown University, Andrew Schwartz at the University of Pittsburgh, and Richard Andersen at Caltech. These researchers have been able to produce working BCIs even using signals recorded from far fewer neurons than Nicolelis did (15-30 neurons versus 50-200 neurons).

Donoghue's group reported training rhesus monkeys to use a BCI to track visual targets on a computer screen (closed-loop BCI) with or without the assistance of a joystick. Schwartz's group created a BCI for three-dimensional tracking in virtual reality and also reproduced BCI control in a robotic arm. The same group made headlines when they demonstrated that a monkey could feed itself pieces of fruit and marshmallows using a robotic arm controlled by the animal's own brain signals.

Andersen's group used recordings of premovement activity from the posterior parietal cortex in their BCI, including signals generated when the experimental animals anticipated receiving a reward.

More research

In addition to predicting kinematic and kinetic parameters of limb movements, BCIs that predict the electromyographic or electrical activity of primate muscles are being developed. Such BCIs could be used to restore mobility in paralyzed limbs by electrically stimulating muscles.

Miguel Nicolelis and colleagues demonstrated that the activity of large neural ensembles can predict arm position. This work made possible the creation of BCIs that read arm movement intentions and translate them into movements of artificial actuators. Carmena and colleagues programmed the neural coding in a BCI that allowed a monkey to control reaching and grasping movements by a robotic arm. Lebedev and colleagues argued that brain networks reorganize to create a new representation of the robotic appendage in addition to the representation of the animal's own limbs.

The biggest impediment to BCI technology at present is the lack of a sensor modality that provides safe, accurate and robust access to brain signals. It is conceivable or even likely, however, that such a sensor will be developed within the next twenty years. The use of such a sensor should greatly expand the range of communication functions that can be provided using a BCI.

Development and implementation of a BCI system is complex and time-consuming. In response to this problem, Gerwin Schalk has been developing a general-purpose system for BCI research, called BCI2000. BCI2000 has been in development since 2000 in a project led by the Brain-Computer Interface R&D Program at the Wadsworth Center of the New York State Department of Health in Albany, New York, USA.

A new 'wireless' approach uses light-gated ion channels such as Channelrhodopsin to control the activity of genetically defined subsets of neurons in vivo. In the context of a simple learning task, illumination of transfected cells in the somatosensory cortex influenced the decision-making process of freely moving mice.

The use of BMIs has also led to a deeper understanding of neural networks and the central nervous system. Research has shown that, despite the inclination of neuroscientists to believe that neurons have the most effect when working together, single neurons can be conditioned through the use of BMIs to fire in a pattern that allows primates to control motor outputs. The use of BMIs has led to the development of the single neuron insufficiency principle, which states that even with a well-tuned firing rate a single neuron can only carry a narrow amount of information, and therefore the highest level of accuracy is achieved by recording the firings of the collective ensemble. Other principles discovered through the use of BMIs include the neuronal multitasking principle, the neuronal mass principle, the neural degeneracy principle, and the plasticity principle.

BCIs have also been proposed for use by people without disabilities. A user-centered categorization of BCI approaches by Thorsten O. Zander and Christian Kothe introduced the term passive BCI. Next to active and reactive BCIs, which are used for direct control, passive BCIs allow assessing and interpreting changes in the user's state during Human-Computer Interaction (HCI). In a secondary, implicit control loop the computer system adapts to its user, improving its usability in general.

BCI Award

The Annual BCI Research Award is given in recognition of outstanding and innovative research in the field of Brain-Computer Interfaces. Each year, a renowned research laboratory is asked to judge the submitted projects. The jury consists of world-leading BCI experts recruited by the awarding laboratory. The jury selects twelve nominees, then chooses a first, second, and third-place winner, who receive awards of $3,000, $2,000, and $1,000, respectively. The following list presents the first-place winners of the Annual BCI Research Award:

  • 2010: Cuntai Guan, Kai Keng Ang, Karen Sui Geok Chua and Beng Ti Ang (A*STAR, Singapore)
Motor imagery-based brain-computer interface robotic rehabilitation for stroke.
  • 2011: Moritz Grosse-Wentrup and Bernhard Schölkopf (Max Planck Institute for Intelligent Systems, Germany)
What are the neuro-physiological causes of performance variations in brain-computer interfacing?
  • 2012: Surjo R. Soekadar and Niels Birbaumer (Applied Neurotechnology Laboratory, Tübingen University Hospital and Institute of Medical Psychology and Behavioral Neurobiology, Eberhard Karls University, Tübingen, Germany)
Improving efficacy of ipsilesional brain-computer interface training in neurorehabilitation of chronic stroke.
  • 2013: M. C. Dadarlat, J. E. O'Doherty, P. N. Sabes (Department of Physiology, Center for Integrative Neuroscience, San Francisco, CA, USA; UC Berkeley-UCSF Graduate Program in Biological Engineering, University of California, San Francisco, CA, USA)
A learning-based approach to artificial sensory feedback: intracortical microstimulation replaces and augments vision.
  • 2014: Katsuhiko Hamada, Hiromu Mori, Hiroyuki Shinoda, Tomasz M. Rutkowski (University of Tokyo, JP; Life Science Center of TARA, University of Tsukuba, JP; RIKEN Brain Science Institute, JP)
Airborne ultrasonic tactile display BCI.
  • 2015: Guy Hotson, David P. McMullen, Matthew S. Fifer, Matthew S. Johannes, Kapil D. Katyal, Matthew P. Para, Robert Armiger, William S. Anderson, Nitish V. Thakor, Brock A. Wester, Nathan E. Crone (Johns Hopkins University, USA)
Individual finger control of the modular prosthetic limb using high-density electrocorticography in a human subject.
  • 2016: Gaurav Sharma, Nick Annetta, Dave Friedenberg, Marcie Bockbrader, Ammar Shaikhouni, W. Mysiw, Chad Bouton, Ali Rezai (Battelle Memorial Institute, Ohio State University, USA)
An implanted BCI for real-time cortical control of functional wrist and finger movements in a human with quadriplegia.
  • 2017: S. Aliakbaryhosseinabadi, E. N. Kamavuako, N. Jiang, D. Farina, N. Mrachacz-Kersting (Center for Sensory-Motor Interaction, Department of Health and Technology, Aalborg University, Aalborg, Denmark; Department of Systems Design Engineering, Faculty of Engineering, University of Waterloo, Waterloo, Canada; and Imperial College London, London, UK)
Online adaptive brain-computer interface with attention variations.



Human BCI research

Invasive BCI

Vision

Invasive BCI research has targeted repairing damaged sight and providing new functionality for people with paralysis. Invasive BCIs are implanted directly into the grey matter of the brain during neurosurgery. Because they lie in the grey matter, invasive devices produce the highest-quality signals of all BCI devices, but they are prone to scar-tissue build-up, causing the signal to become weaker, or even non-existent, as the body reacts to a foreign object in the brain.

In vision science, direct brain implants have been used to treat non-congenital (acquired) blindness. One of the first scientists to produce a working brain interface to restore sight was the private researcher William Dobelle.

Dobelle's first prototype was implanted into "Jerry", a man blinded in adulthood, in 1978. A single-array BCI containing 68 electrodes was implanted onto Jerry's visual cortex and succeeded in producing phosphenes, the sensation of seeing light. The system included cameras mounted on glasses to send signals to the implant. Initially, the implant allowed Jerry to see shades of grey in a limited field of vision at a low frame rate. It also required him to be hooked up to a mainframe computer, but shrinking electronics and faster computers made his artificial eye more portable and allowed him to perform simple tasks unassisted.

In 2002, Jens Naumann, also blinded in adulthood, became the first in a series of 16 paying patients to receive Dobelle's second-generation implant, marking one of the earliest commercial uses of BCIs. The second-generation device used a more sophisticated implant, enabling better mapping of phosphenes into coherent vision. Phosphenes are spread out across the visual field in what researchers call "the starry-night effect". Immediately after his implant, Jens was able to use his imperfectly restored vision to drive an automobile slowly around the parking area of the research institute. Unfortunately, Dobelle died in 2004 before his processes and developments were documented. Subsequently, when Mr. Naumann and the other patients in the program began having problems with their vision, there was no relief and they eventually lost their "sight" again. Naumann wrote about his experience with Dobelle's work in Search for Paradise and has returned to his farm in Southeast Ontario, Canada, to resume his normal activities.

Movement

BCIs focusing on motor neuroprosthetics aim to restore movement to individuals with paralysis or to provide devices to assist them, such as interfaces with computers or robotic arms.

Researchers at Emory University in Atlanta, led by Philip Kennedy and Roy Bakay, were the first to install a brain implant in a human that produced signals of high enough quality to simulate movement. Their patient, Johnny Ray (1944-2002), suffered from locked-in syndrome after suffering a brain-stem stroke in 1997. Ray's implant was installed in 1998 and he lived long enough to start working with the implant, eventually learning to control a computer cursor; he died in 2002 of a brain aneurysm.

Tetraplegic Matt Nagle became the first person to control an artificial hand using a BCI in 2005, as part of the first nine-month human trial of Cyberkinetics's BrainGate chip implant. Implanted in Nagle's right precentral gyrus (the area of the motor cortex for arm movement), the 96-electrode BrainGate implant allowed Nagle to control a robotic arm by thinking about moving his hand, as well as a computer cursor, lights and a TV. One year later, professor Jonathan Wolpaw received the prize of the Altran Foundation for Innovation to develop a brain-computer interface with electrodes located on the surface of the skull, instead of directly in the brain.

More recently, research teams led by the BrainGate group at Brown University and a group led by the University of Pittsburgh Medical Center, both in collaboration with the United States Department of Veterans Affairs, have demonstrated further success in direct control of robotic prosthetic limbs with many degrees of freedom, using direct connections to arrays of neurons in the motor cortex of patients with tetraplegia.

Partially invasive BCI

Partially invasive BCI devices are implanted inside the skull but rest outside the brain rather than within the grey matter. They produce better-resolution signals than non-invasive BCIs, where the bone tissue of the cranium deflects and deforms signals, and they carry a lower risk of forming scar tissue in the brain than fully invasive BCIs. There has been a preclinical demonstration of intracortical BCIs from the stroke perilesional cortex.

Electrocorticography (ECoG) measures the electrical activity of the brain from beneath the skull in a way similar to non-invasive electroencephalography, but the electrodes are embedded in a thin plastic pad placed above the cortex, beneath the dura mater. ECoG technology was first tried in humans in 2004 by Eric Leuthardt and Daniel Moran of Washington University in St. Louis. In a later trial, the researchers enabled a teenage boy to play Space Invaders using his ECoG implant. This research indicates that control is rapid, requires minimal training, and may be an ideal tradeoff with regard to signal fidelity and level of invasiveness.

(Note: these electrodes had not been implanted in the patient with the intention of developing a BCI. The patient had been suffering from severe epilepsy and the electrodes were temporarily implanted to help physicians localize seizure foci; the BCI researchers simply took advantage of this.)

Signals can be either subdural or epidural, but are not taken from within the brain parenchyma itself. The approach has not been studied extensively until recently due to limited access to subjects. Currently, the only way to acquire the signal for study is through the use of patients requiring invasive monitoring for localization and resection of an epileptogenic focus.

ECoG is a very promising intermediate BCI modality because it has higher spatial resolution, a better signal-to-noise ratio, a wider frequency range, and lower training requirements than scalp-recorded EEG, and at the same time has lower technical difficulty, lower clinical risk, and probably superior long-term stability than intracortical single-neuron recording. This feature profile and recent evidence of a high level of control with minimal training requirements show potential for real-world application for people with motor disabilities.

Light-reactive imaging BCI devices are still in the realm of theory. These would involve implanting a laser inside the skull. The laser would be trained on a single neuron and the neuron's reflectance measured by a separate sensor. When the neuron fires, the laser light pattern and the wavelengths it reflects would change slightly. This would allow researchers to monitor single neurons while requiring less contact with tissue and reducing the risk of scar-tissue build-up.

Non-invasive BCI

There have also been experiments in humans using non-invasive neuroimaging technologies as interfaces. The substantial majority of published BCI work involves non-invasive EEG-based BCIs. Non-invasive EEG-based technologies and interfaces have been used for a much broader variety of applications. Although EEG-based interfaces are easy to wear and require no surgery, they have relatively poor spatial resolution and cannot effectively use higher-frequency signals, because the skull dampens signals, dispersing and blurring the electromagnetic waves created by the neurons. EEG-based interfaces also require some time and effort prior to each usage session, whereas non-EEG-based ones, as well as invasive ones, require no prior-usage training. Overall, the best BCI for each user depends on numerous factors.

Non-EEG human-computer interface

Pupil-size oscillation

A 2016 article described an entirely new communication device and non-EEG-based human-computer interface, requiring no visual fixation or ability to move the eyes at all. It is based on covert interest in (i.e. without fixing the eyes on) a chosen letter on a virtual keyboard, with each letter having its own circle that micro-oscillates in brightness with a different time course from the other letters. Letter selection is based on the best fit between, on the one hand, the unintentional pupil-size oscillation pattern and, on the other hand, the brightness oscillation pattern of each letter's circle. Accuracy is additionally improved by the user mentally rehearsing the words 'bright' and 'dark' in synchrony with the brightness transitions of the chosen letter's circle.
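As a rough illustration of the matching step described above, the sketch below correlates a recorded pupil-size trace against each letter's brightness waveform and picks the best fit. The function name, the use of Pearson correlation, and the data layout are assumptions made here for illustration, not details taken from the 2016 article.

```python
import numpy as np

def select_letter(pupil_trace, brightness_patterns):
    """Pick the letter whose brightness oscillation best matches the pupil trace.

    pupil_trace: 1-D array of pupil-diameter samples.
    brightness_patterns: dict mapping each letter to a 1-D array of its
        circle's brightness over the same time window and sampling rate.
    Returns the letter with the highest Pearson correlation (assumed metric).
    """
    def corr(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    return max(brightness_patterns,
               key=lambda letter: corr(pupil_trace, brightness_patterns[letter]))
```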

Functional near-infrared spectroscopy

In 2014 and 2017, BCIs using functional near-infrared spectroscopy for "locked-in" patients with amyotrophic lateral sclerosis (ALS) were able to restore some of the patients' basic ability to communicate with other people.

Electroencephalography (EEG)-based brain-computer interfaces

Overview

Electroencephalography (EEG) is the most studied non-invasive interface, mainly due to its fine temporal resolution, ease of use, portability and low set-up cost. The technology is, however, somewhat susceptible to noise.

In the early days of BCI research, another substantial barrier to using EEG as a brain-computer interface was the extensive training required before users could work the technology. For example, in experiments beginning in the mid-1990s, Niels Birbaumer at the University of Tübingen in Germany trained severely paralysed people to self-regulate the slow cortical potentials in their EEG to such an extent that these signals could be used as a binary signal to control a computer cursor. (Birbaumer had earlier trained epileptics to prevent impending seizures by controlling this low-voltage wave.) The experiment saw ten patients trained to move a computer cursor by controlling their brainwaves. The process was slow, requiring more than an hour for a patient to write 100 characters with the cursor, while training often took months. However, the slow cortical potential approach to BCIs has not been used in several years, since other approaches require little or no training, are faster and more accurate, and work for a greater proportion of users.

Another research parameter is the type of oscillatory activity that is measured. Later research by Birbaumer with Jonathan Wolpaw at New York State University has focused on developing technology that allows users to choose the brain signals they find easiest to use to operate a BCI, including mu and beta rhythms.

A further parameter is the method of feedback used, and this is shown in studies of P300 signals. Patterns of P300 waves are generated involuntarily (stimulus-feedback) when people see something they recognize, and they may allow BCIs to decode categories of thoughts without training patients first. By contrast, the biofeedback methods described above require learning to control brainwaves so the resulting brain activity can be detected.
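A common way to expose the P300 in practice is to average many stimulus-locked EEG epochs; the deflection near 300 ms stands out for attended (recognized) stimuli. The sketch below is a minimal, hypothetical version of that averaging step (function name, channel choice and window length are illustrative assumptions):

```python
import numpy as np

def average_epochs(eeg, stimulus_onsets, fs, window=(0.0, 0.6)):
    """Average single-channel EEG epochs time-locked to stimulus onsets.

    eeg:             1-D array of samples (e.g. from a midline channel such as Pz).
    stimulus_onsets: sample indices where stimuli appeared.
    fs:              sampling rate in Hz.
    For attended target stimuli, the returned average is expected to show a
    positive deflection roughly 300 ms after onset (the P300).
    """
    start, stop = int(window[0] * fs), int(window[1] * fs)
    epochs = [eeg[i + start:i + stop] for i in stimulus_onsets if i + stop <= len(eeg)]
    return np.mean(epochs, axis=0)
```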

While EEG-based brain-computer interfaces have been pursued extensively by a number of research laboratories, recent advances made by Bin He and his team at the University of Minnesota suggest the potential of an EEG-based brain-computer interface to accomplish tasks close to those of invasive brain-computer interfaces. Using advanced functional neuroimaging including BOLD functional MRI and EEG source imaging, Bin He and co-workers identified the co-variation and co-localization of electrophysiological and hemodynamic signals induced by motor imagination. Refined by a neuroimaging approach and by a training protocol, Bin He and co-workers demonstrated the ability of a non-invasive EEG-based brain-computer interface to control the flight of a virtual helicopter in three-dimensional space, based on motor imagination. In June 2013 it was announced that Bin He had developed a technique to enable a remote-control helicopter to be guided through an obstacle course.

In addition to brain-computer interfaces based on brain waves, as recorded from scalp EEG electrodes, Bin He and co-workers explored a virtual EEG-signal-based brain-computer interface by first solving the EEG inverse problem and then using the resulting virtual EEG for brain-computer interface tasks. Well-controlled studies suggested the merits of such a source-analysis-based brain-computer interface.

A 2014 study found that severely motor-impaired patients could communicate faster and more reliably with a non-invasive EEG BCI than with any muscle-based communication channel.

Dry active electrode arrays

In the early 1990s Babak Taheri, at the University of California, Davis, demonstrated the first single- and also multichannel dry active electrode arrays using micro-machining. The single-channel dry EEG electrode construction and results were published in 1994. The arrayed electrode was also shown to perform well compared to silver/silver chloride electrodes. The device consisted of four sensor sites with integrated electronics to reduce noise by impedance matching. The advantages of such electrodes are: (1) no electrolyte used, (2) no skin preparation, (3) significantly reduced sensor size, and (4) compatibility with EEG monitoring systems. The active electrode array is an integrated system made of an array of capacitive sensors with local integrated circuitry, housed in a package with batteries to power the circuitry. This level of integration was required to achieve the functional performance obtained by the electrode.

The electrode was tested on an electrical test bench and on human subjects in four modalities of EEG activity, namely: (1) spontaneous EEG, (2) sensory event-related potentials, (3) brain-stem potentials, and (4) cognitive event-related potentials. The performance of the dry electrode compared favorably with that of standard wet electrodes in terms of skin preparation, the lack of any gel requirement (dry), and a higher signal-to-noise ratio.

In 1999 researchers at Case Western Reserve University in Cleveland, Ohio, led by Hunter Peckham, used a 64-electrode EEG skullcap to return limited hand movements to quadriplegic Jim Jatich. As Jatich concentrated on simple but opposite concepts like up and down, his beta-rhythm EEG output was analysed using software to identify patterns in the noise. A basic pattern was identified and used to control a switch: above-average activity was set to on, below-average to off. As well as enabling Jatich to control a computer cursor, the signals were also used to drive the nerve controllers embedded in his hands, restoring some movement.
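The control scheme described for Jatich amounts to a band-power threshold. The sketch below shows the general idea, assuming a beta band of 15-18 Hz and a calibration baseline; the band edges, filter order and function names are illustrative choices, not details taken from the Case Western study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def beta_switch(eeg_window, fs, baseline_power, band=(15.0, 18.0)):
    """Return True ("on") if beta-band power exceeds the calibrated baseline.

    eeg_window:     1-D array covering one decision window of single-channel EEG.
    fs:             sampling rate in Hz.
    baseline_power: average beta-band power measured during a calibration run.
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    beta = filtfilt(b, a, eeg_window)          # isolate the beta rhythm
    power = float(np.mean(beta ** 2))          # mean squared amplitude
    return power > baseline_power              # above average -> switch on
```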

Prosthesis and environmental control

Non-invasive BCIs have also been applied to enable brain control of prosthetic upper- and lower-extremity devices in people with paralysis. For example, Gert Pfurtscheller of Graz University of Technology and colleagues demonstrated a BCI-controlled functional electrical stimulation system to restore upper-extremity movement in a person with tetraplegia due to spinal cord injury. Between 2012 and 2013, researchers at the University of California, Irvine demonstrated for the first time that it is possible to use BCI technology to restore brain-controlled walking after spinal cord injury. In their spinal cord injury research study, a person with paraplegia was able to operate a BCI-controlled robotic gait orthosis to regain basic brain-controlled ambulation. In 2009 Alex Blainey, an independent researcher based in the UK, successfully used the Emotiv EPOC to control a 5-axis robot arm. He then went on to make several demonstrations of mind-controlled wheelchairs and home automation that could be operated by people with limited or no motor control, such as those with paraplegia and cerebral palsy.

Research into military use of BCIs funded by DARPA has been ongoing since the 1970s. The current focus of research is user-to-user communication through analysis of neural signals.

DIY and open source BCI

In 2001, the OpenEEG Project was initiated by a group of DIY neuroscientists and engineers. The ModularEEG was the primary device created by the OpenEEG community; it is a 6-channel signal capture board that costs between $200 and $400 to make at home. The OpenEEG Project marked a significant moment in the emergence of DIY brain-computer interfacing.

In 2010, the Frontier Nerds of NYU's ITP program published a thorough tutorial titled How To Hack Toy EEGs. The tutorial, which stirred the minds of many budding DIY BCI enthusiasts, demonstrated how to create a single-channel at-home EEG with an Arduino and a Mattel Mindflex at a very reasonable price. This tutorial amplified the DIY BCI movement.

In 2013, OpenBCI emerged from a DARPA solicitation and a subsequent Kickstarter campaign. They created a high-quality, open-source 8-channel EEG acquisition board, known as the 32-bit Board, that retailed for under $500. Two years later they created the first 3D-printed EEG headset, known as the Ultracortex, as well as a 4-channel EEG acquisition board, known as the Ganglion Board, that retailed for under $100.

In 2015, NeuroTechX was created with the mission of building an international network for neurotechnology.

MEG and MRI

Magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) have both been used successfully as non-invasive BCIs. In a widely reported experiment, fMRI allowed two users being scanned to play Pong in real time by altering their hemodynamic response, or cerebral blood flow, through biofeedback techniques.

fMRI measurements of the hemodynamic response in real time have also been used to control robot arms, with a seven-second delay between thought and movement.

In 2008, research developed at the Advanced Telecommunications Research (ATR) Computational Neuroscience Laboratories in Kyoto, Japan, allowed scientists to reconstruct images directly from the brain and display them on a computer in black and white at a resolution of 10x10 pixels. The article announcing this achievement was the cover story of the 10 December 2008 issue of the journal Neuron.

In 2011, researchers from UC Berkeley published a study reporting second-by-second reconstruction of videos watched by the study's subjects, from fMRI data. This was achieved by creating a statistical model relating the visual patterns in videos shown to the subjects to the brain activity caused by watching the videos. This model was then used to find, in a database of 18 million seconds of random YouTube videos, the 100 one-second video segments whose visual patterns best matched the brain activity recorded while the subjects watched a new video. These 100 one-second video extracts were then combined into a mashed-up image that resembled the video being watched.
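The retrieval step of that study can be pictured as a similarity search: predicted brain responses for every library clip are compared with the measured response and the closest clips are kept. The sketch below uses plain correlation ranking as a stand-in for the study's actual statistical model; the names and data layout are assumptions made for illustration.

```python
import numpy as np

def top_matching_clips(observed, predicted_library, k=100):
    """Rank library clips by how well their predicted activity matches the data.

    observed:          1-D array of voxel responses for one second of new video.
    predicted_library: 2-D array, one row of predicted voxel responses per clip.
    Returns indices of the k best-matching clips (k=100 echoes the study).
    """
    obs = (observed - observed.mean()) / (observed.std() + 1e-12)
    lib = predicted_library - predicted_library.mean(axis=1, keepdims=True)
    lib /= lib.std(axis=1, keepdims=True) + 1e-12
    scores = lib @ obs / obs.size              # per-clip correlation with the data
    return np.argsort(scores)[::-1][:k]
```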

BCI control strategies in neurogaming

Motor imagery

Motor imagery involves imagining the movement of various body parts, resulting in sensorimotor cortex activation, which modulates sensorimotor oscillations in the EEG. This can be detected by the BCI to infer the user's intent. Motor imagery typically requires a number of training sessions before acceptable control of the BCI is acquired. These training sessions may take several hours over several days before the user can consistently employ the technique with acceptable levels of precision. Regardless of the duration of training, some users are unable to master the control scheme. This results in a very slow pace of gameplay. Advanced machine learning methods have recently been developed to compute subject-specific models for detecting motor imagery performance. The top-performing algorithm from BCI Competition IV dataset 2 for motor imagery is the Filter Bank Common Spatial Pattern (FBCSP), developed by Ang et al. from A*STAR, Singapore.
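Common Spatial Patterns, the core of the FBCSP algorithm mentioned above, finds spatial filters whose output variance differs maximally between two imagery classes; FBCSP repeats this per frequency band and then selects features. The sketch below is a generic two-class CSP, not the competition code (regularization, band-pass filtering and feature selection are omitted, and the function names are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Compute CSP spatial filters for two-class motor imagery.

    trials_a, trials_b: arrays of shape (n_trials, n_channels, n_samples),
        one per class (e.g. left-hand vs right-hand imagery), already band-passed.
    Returns an array (2*n_pairs, n_channels) of spatial filters.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: ca w = lambda (ca + cb) w
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]   # extremes of the spectrum
    return vecs[:, picks].T

def log_variance_features(trial, filters):
    """Project one trial through the CSP filters and return log-variance features."""
    projected = filters @ trial
    variances = projected.var(axis=1)
    return np.log(variances / variances.sum())
```

The log-variance features are typically fed to a simple classifier such as LDA; that choice, too, is a common convention rather than a requirement.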

Bio/neurofeedback for passive BCI design

Biofeedback is used to monitor a subject's mental relaxation. In some cases, biofeedback does not monitor electroencephalography (EEG), but instead bodily parameters such as electromyography (EMG), galvanic skin resistance (GSR), and heart rate variability (HRV). Many biofeedback systems are used to treat certain disorders such as attention deficit hyperactivity disorder (ADHD), sleep problems in children, teeth grinding, and chronic pain. EEG biofeedback systems typically monitor four different bands (theta: 4-7 Hz, alpha: 8-12 Hz, SMR: 12-15 Hz, beta: 15-18 Hz) and challenge the subject to control them. Passive BCI involves using BCI to enrich human-machine interaction with implicit information on the user's actual state, for example, simulations that detect when a user intends to push the brakes during an emergency car-stopping procedure. Game developers using passive BCIs need to acknowledge that, through repetition of game levels, the user's cognitive state will change or adapt. During the first play of a level, the user will react to things differently from during the second play: for example, the user will be less surprised by an event in the game if he or she is expecting it.
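A minimal sketch of the band monitoring such neurofeedback systems perform, using the four bands listed above and a Welch power-spectrum estimate (the function name and the nperseg choice are illustrative):

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 7), "alpha": (8, 12), "smr": (12, 15), "beta": (15, 18)}  # Hz

def band_powers(eeg_window, fs):
    """Return the power in each neurofeedback band for one EEG channel."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=min(len(eeg_window), 2 * int(fs)))
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        powers[name] = float(np.trapz(psd[mask], freqs[mask]))  # integrate PSD over the band
    return powers
```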

Visual evoked potential (VEP)

A VEP is an electrical potential recorded after a subject is presented with a visual stimulus. There are several types of VEPs.

Steady-state visually evoked potentials (SSVEPs) use potentials generated by exciting the retina with visual stimuli modulated at particular frequencies. SSVEP stimuli are often formed from alternating checkerboard patterns and at times simply use flashing images. The frequency of the phase reversal of the stimulus can be clearly distinguished in the EEG spectrum; this makes detection of SSVEP stimuli relatively easy. SSVEP has proved successful in many BCI systems. This is due to several factors: the signal elicited is measurable in as large a population as the transient VEP, and blink and electrocardiographic artifacts do not affect the frequencies being monitored. In addition, the SSVEP signal is exceptionally robust; the topographic organization of the primary visual cortex is such that a broader area obtains afferents from the central or foveal region of the visual field. SSVEP does have some problems, however. Because SSVEPs use flashing stimuli to infer user intent, the user must gaze at one of the flashing or iterating symbols to interact with the system. It is therefore likely that the symbols become irritating and uncomfortable to use during longer play sessions, which can often last more than an hour and which may not make for ideal gameplay.
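Because each SSVEP target flickers at its own frequency, detection can be as simple as comparing spectral power at the candidate frequencies (and their harmonics) and picking the largest. The sketch below shows that idea; the frequency set, windowing and scoring are illustrative assumptions, and practical systems often use more robust methods such as canonical correlation analysis.

```python
import numpy as np

def detect_ssvep_target(eeg_window, fs, stim_freqs=(8.0, 10.0, 12.0, 15.0), harmonics=2):
    """Return the stimulation frequency whose spectral power (plus harmonics) is largest.

    eeg_window: 1-D array from an occipital channel; fs: sampling rate in Hz.
    """
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)

    def power_at(f):
        # Sum power at the fundamental and its harmonics (nearest FFT bins).
        return sum(spectrum[np.argmin(np.abs(freqs - f * h))] for h in range(1, harmonics + 1))

    scores = [power_at(f) for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))]
```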

Another type of VEP used in applications is the P300 potential. The P300 event-related potential is a positive peak in the EEG that occurs roughly 300 ms after the appearance of a target stimulus (a stimulus the user is waiting for or seeking) or an oddball stimulus. The P300 amplitude decreases as the target stimuli and the ignored stimuli grow more similar. The P300 is thought to be related to a higher-level attention process or an orienting response. Using the P300 as a control scheme has the advantage that the participant only has to attend limited training sessions. The first application to use the P300 model was the P300 matrix. In this system, a subject chooses a letter from a grid of 6 by 6 letters and numbers. The rows and columns of the grid flash sequentially, and every time the selected "choice letter" is illuminated the user's P300 is (potentially) elicited. However, the communication process, at approximately 17 characters per minute, is quite slow. The P300 offers a discrete selection rather than a continuous control mechanism. The advantage of P300 use within games is that the player does not have to learn a completely new control system: only short training instances are needed, to learn the gameplay mechanics and basic use of the BCI paradigm.
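In a P300 matrix speller the decision reduces to bookkeeping: every row and column flash gets a classifier score for whether the following epoch contained a P300, and the spelled character is the cell where the best row and best column intersect. A minimal, hypothetical sketch of that accumulation step (names and data layout are assumptions):

```python
import numpy as np

def spell_character(flash_scores, grid):
    """Select the character at the intersection of the best-scoring row and column.

    flash_scores: iterable of (kind, index, score) tuples, where kind is "row" or
        "col", index identifies which row/column flashed, and score is a
        classifier's P300 evidence for the epoch that followed the flash.
    grid: 2-D list of characters, e.g. a 6x6 matrix of letters and digits.
    """
    row_totals = np.zeros(len(grid))
    col_totals = np.zeros(len(grid[0]))
    for kind, index, score in flash_scores:
        if kind == "row":
            row_totals[index] += score
        else:
            col_totals[index] += score
    return grid[int(np.argmax(row_totals))][int(np.argmax(col_totals))]
```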

Synthetic telepathy/silent communication

In a $6.3 million Army initiative to invent devices for telepathic communication, Gerwin Schalk, underwritten by a $2.2 million grant, found that the use of ECoG signals can discriminate the vowels and consonants embedded in spoken and imagined words, shedding light on the distinct mechanisms associated with the production of vowels and consonants, and could provide a basis for brain-based communication using imagined speech.

In 2002, Kevin Warwick had an array of 100 electrodes fired into his nervous system in order to link his nervous system to the Internet and investigate enhancement possibilities. With this in place Warwick successfully carried out a series of experiments. With electrodes also implanted into his wife's nervous system, they conducted the first direct electronic communication experiment between the nervous systems of two humans.

Research into synthetic telepathy using subvocalization is taking place at the University of California, Irvine under lead scientist Mike D'Zmura. The first such communication took place in the 1960s, using EEG to create Morse code from alpha brain waves. Using EEG to communicate imagined speech is less accurate than the invasive method of placing electrodes between the skull and the brain. On February 27, 2013 the group of Miguel Nicolelis at Duke University and IINN-ELS successfully connected the brains of two rats with electronic interfaces that allowed them to directly share information, in the first-ever direct brain-to-brain interface.

On September 3, 2014, direct communication between human brains became possible over extended distances through Internet transmission of EEG signals.

In March and May 2014, a study conducted by the Dipartimento di Psicologia Generale - Università di Padova, EVANLAB - Firenze, the company LiquidWeb s.r.l., and the Dipartimento di Ingegneria e Architettura - Università di Trieste, reported confirmatory results from analyzing the EEG activity of two human partners spatially separated by approximately 190 km, when one member of the pair received stimulation and the second was connected only mentally with the first.



BCI cell-culture

Researchers have built devices to interface with neural cells and entire neural networks in cultures outside animals. As well as furthering research on animal implantable devices, experiments on cultured neural tissue have focused on building problem-solving networks, constructing basic computers and manipulating robotic devices. Research into techniques for stimulating and recording from individual neurons grown on semiconductor chips is sometimes referred to as neuroelectronics or neurochips.

Development of the first working neurochip was claimed by a Caltech team led by Jerome Pine and Michael Maher in 1997. The Caltech chip had room for 16 neurons.

In 2003 a team led by Theodore Berger, at the University of Southern California, began work on a neurochip designed to function as an artificial or prosthetic hippocampus. The neurochip was designed to function in rat brains and was intended as a prototype for the eventual development of higher-brain prostheses. The hippocampus was chosen because it is considered to be the most ordered and structured part of the brain and is the most studied area. Its function is to encode experiences for storage as long-term memories elsewhere in the brain.

In 2004 Thomas DeMarse at the University of Florida used a culture of 25,000 neurons taken from a rat's brain to fly an F-22 fighter jet simulator. After collection, the cortical neurons were cultured in a petri dish and rapidly began to reconnect to form a living neural network. The cells were arranged over a grid of 60 electrodes and used to control the pitch and yaw functions of the simulator. The study's focus was on understanding how the human brain performs and learns computational tasks at a cellular level.



Ethical considerations

Important ethical, legal and societal issues related to brain-computer interfacing are:

  • conceptual issues (researchers disagree over what is and what is not a brain-computer interface),
  • obtaining informed consent from people who have difficulty communicating,
  • risk/benefit analysis,
  • the BCI team's shared responsibility (eg how to ensure that responsible group decisions can be made),
  • the consequences of BCI technology for the quality of life of patients and their families,
  • side effects (eg neurofeedback of sensorimotor rhythm training reported to affect sleep quality),
  • personal responsibility and possible obstacles (eg who is responsible for wrong actions with neuroprosthesis),
  • issues about personality and personality and possible changes,
  • blurring of the division between human and machine,
  • therapeutic applications and their possible exceedance,
  • the ethical questions of research that arise when advancing from animal experiments to applications in human subjects,
  • read minds and privacy,
  • mind control,
  • the use of technology in advanced interrogation techniques by government authorities,
  • selective increase and social stratification.
  • communication to the media.

In their current form, most BCIs are far removed from the ethical issues considered above. They are actually similar to corrective therapies in function. Clausen stated in 2009 that "BCIs pose ethical challenges, but these are conceptually similar to those that bioethicists have addressed for other realms of therapy". Moreover, he suggests that bioethics is well prepared to deal with the issues that arise with BCI technologies. Haselager and colleagues pointed out that expectations of BCI efficacy and value play a major role in ethical analysis and in how BCI scientists should approach the media. Furthermore, standard protocols can be implemented to ensure ethically sound informed-consent procedures with locked-in patients.

The case of BCIs today has parallels in medicine, as will its evolution. Just as pharmaceutical science began as a remedy for impairments and is now used to increase focus and reduce the need for sleep, BCIs will likely transform gradually from therapies into enhancements. Researchers are well aware that sound ethical guidelines, appropriately moderated enthusiasm in media coverage, and education about BCI systems will be of utmost importance for the societal acceptance of this technology. Thus, recent efforts have been made within the BCI community to create consensus on ethical guidelines for BCI research, development and dissemination.



Clinical and research-grade BCI-based interfaces

Some companies have produced high-end systems that have been widely used in established BCI labs for several years. These systems typically entail more channels than the low-cost systems below, with much higher signal quality and robustness in real-world settings. Some systems from newer companies have been gaining attention for new BCI applications for new user groups, such as persons with stroke or coma.

  • In 2011, Nuamps EEG from www.neuroscan.com was used to study the extent of detectable brain signals from stroke patients performing motor imagery with BCI in a large clinical trial, and the results showed that the majority of the patients (87%) could use the BCI.
  • In March 2012 g.tec introduced the intendiX-SPELLER, the first commercially available BCI system for home use, which can be used to control computer games and applications. It can detect different brain signals with an accuracy of 99%. g.tec has hosted several workshop tours to demonstrate the intendiX system and other hardware and software to the public, such as a workshop tour of the US West Coast during September 2012.
  • In 2012, the Italian startup company Liquidweb s.r.l. released "Braincontrol", a first prototype of a BCI-based AAC designed for locked-in patients. It was validated between 2012 and 2014 with the involvement of LIS and CLIS patients. In 2014 the company introduced the commercial version of the product, with a class I CE mark as a medical device.



Low-cost BCI-based interfaces

Recently a number of companies have scaled back medical-grade EEG technology (and in one case, NeuroSky, rebuilt the technology from the ground up) to create inexpensive BCIs. This technology has been built into toys and gaming devices; some of these toys have been extremely commercially successful, such as those from NeuroSky and the Mattel MindFlex.

  • In 2006 Sony patented a nervous system interface that allows radio waves to affect signals in the neural cortex.
  • In 2007, NeuroSky released the first affordable consumer-based EEG along with the game NeuroBoy. It was also the first large-scale EEG device to use dry sensor technology.
  • In 2008, OCZ Technology developed a device for use in video games relying primarily on electromyography.
  • In 2008, the Final Fantasy developer Square Enix announced that it was partnering with NeuroSky to create a game, Judecca.
  • In 2009, Mattel partnered with NeuroSky to release the Mindflex, a game that uses EEG to steer a ball through an obstacle course. It is by far the best-selling consumer-based EEG to date.
  • In 2009, Uncle Milton Industries partnered with NeuroSky to release the Star Wars Force Trainer, a game designed to create the illusion of possessing the Force.
  • In 2009, Emotiv released the EPOC, a 14-channel EEG device that can read 4 mental states, 13 conscious states, facial expressions, and head movements. The EPOC is the first commercial BCI to use dry sensor technology, which can be dampened with a saline solution for a better connection.
  • In November 2011, Time Magazine selected "necomimi", produced by Neurowear, as one of the best inventions of the year. The company announced that it expected to launch a consumer version of the garment, consisting of cat-like ears controlled by a brain-wave reader produced by NeuroSky, in spring 2012.
  • In February 2014, They Shall Walk (a non-profit organization dedicated to building exoskeletons, dubbed LIFESUITs, for paraplegics and quadriplegics) began a partnership with James W. Shakarji on the development of a wireless BCI.
  • In 2016, a group of hobbyists developed an open-source board that sends neural signals to the audio jack of a smartphone, lowering the cost of entry-level BCI to £20. Basic diagnostic software is available for Android devices, as well as a text-entry app for Unity.



Future directions

A consortium of 12 European partners has completed a roadmap to support the European Commission in its funding decisions for the new framework program Horizon 2020. The project, funded by the European Commission, began in November 2013 and ended in April 2015. The roadmap is now complete and can be downloaded from the project's web page. A 2015 publication led by Dr. Clemens Brunner describes some of the analyses and achievements of this project, as well as the emerging Brain-Computer Interface Society. For example, the article reviews work within this project that further defined BCIs and their applications, explored recent trends, discussed ethical issues, and evaluated different directions for new BCIs. As the article notes, their new roadmap generally extends and supports the recommendations from the Future BNCI project managed by Dr. Brendan Allison, which conveys substantial enthusiasm for emerging BCI directions.

In addition, other recent publications have explored the most promising future BCI directions for new groups of disabled users. Some prominent examples are summarized below.

Disorders of consciousness (DOC)

Some people have a disorder of consciousness (DOC). This state is defined to include persons in a coma, as well as persons in a vegetative state (VS) or a minimally conscious state (MCS). New BCI research seeks to help persons with DOC in different ways. A key initial goal is to identify patients who are able to perform basic cognitive tasks, which would of course lead to a change in their diagnosis. That is, some persons diagnosed with DOC may in fact be able to process information and make important life decisions (such as whether to seek therapy, where to live, and their views on end-of-life decisions regarding them). Some persons diagnosed with DOC die as a result of end-of-life decisions, which may be made by family members who sincerely feel this is in the patient's best interests. Given the new prospect of allowing these patients to provide their views on such decisions, there would seem to be a strong ethical pressure to develop this research direction to guarantee that DOC patients are given an opportunity to decide whether they want to live.

These and other articles describe new challenges and solutions in using BCI technology to help persons with DOC. One major challenge is that these patients cannot use BCIs based on vision. Hence, new tools rely on auditory and/or vibrotactile stimuli. Patients may wear headphones and/or vibrotactile stimulators placed on the wrists, neck, legs, and/or other locations. Another challenge is that patients may fade in and out of consciousness and can only communicate at certain times. This may indeed be a cause of misdiagnosis. Some patients may only be able to respond to physicians' requests during a few hours per day (which might not be predictable ahead of time) and thus may have been unresponsive during diagnosis. Therefore, new methods rely on tools that are easy to use in field settings, even without expert help, so that family members and other persons without any medical or technical background can still use them. This reduces the cost, time, need for expertise, and other burdens of DOC assessment. Automated tools can ask simple questions that patients can easily answer, such as "Is your father's name George?" or "Were you born in the USA?" Automated instructions inform patients that they may convey yes or no by (for example) focusing their attention on stimuli on the right versus left wrist. This focused attention produces reliable changes in EEG patterns that can help determine whether the patient is able to communicate. The results could be presented to physicians and therapists, which could lead to a revised diagnosis and therapy. In addition, these patients could then be provided with BCI-based communication tools that could help them convey basic needs, adjust bed position and HVAC (heating, ventilation, and air conditioning), and otherwise empower them to make major life decisions and communicate.
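The yes/no scheme described above can be pictured as comparing the averaged responses evoked by the two wrist stimulators: attending to one side should enlarge the response to that side's stimuli. The sketch below is a deliberately simplified, hypothetical decision rule (the time window, amplitude measure and yes/no mapping are illustrative, not taken from any specific DOC system):

```python
import numpy as np

def infer_answer(right_epochs, left_epochs, fs, window=(0.25, 0.45)):
    """Guess 'yes' or 'no' from which wrist's stimuli evoke the larger average response.

    right_epochs, left_epochs: arrays of shape (n_stimuli, n_samples), EEG epochs
        time-locked to vibrotactile stimuli on the right and left wrist.
    Assumes the instruction was "attend to the right wrist for yes, left for no".
    """
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    right_amp = np.abs(right_epochs.mean(axis=0)[lo:hi]).mean()
    left_amp = np.abs(left_epochs.mean(axis=0)[lo:hi]).mean()
    return "yes" if right_amp > left_amp else "no"
```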

This research effort is supported in part by various EU-funded projects, such as the DECODER project led by Prof. Andrea Kuebler at the University of Wuerzburg. This project contributed to the first BCI system developed for DOC assessment and communication, called mindBEAGLE. The system is designed to help non-expert users work with DOC patients, but is not intended to replace medical staff. An EU-funded project that began in 2015, called ComAlert, conducted further research and development to improve DOC prediction, assessment, rehabilitation and communication, called "PARC" in that project. Another project, funded by the National Science Foundation, is led by Ole

Source of the article : Wikipedia
