Using brain-computer interfaces (BCIs) to restore reliable communication and control to people with locked-in syndrome has long been a futuristic trope of medical dramas and sci-fi. A team from NCCR Robotics and CNBI, EPFL have recently published a paper detailing their work as a step towards bringing this technique into everyday life …
15 Jun – 16 Jun 2017
Building Bodies for Brains & Brains for Bodies & 3rd Japan-EU Workshop on Neurorobotics
Registration for both events is now open.
9 Oct – 12 Oct 2016
WORKSHOP ON BRAIN-MACHINE INTERFACES (SMC 2016)
InterContinental Hotel, 1052 Budapest
Please see: https://documents.epfl.ch/users/c/ch/chavarri/www/IEEESMC2016_BMI/BMI-IEEESMC2016.html
Looking for publications? You might want to consider searching on the EPFL Infoscience site, which provides advanced publication search capabilities.
This paper describes a brain-machine interface for the online control of a powered lower-limb exoskeleton based on electroencephalogram (EEG) signals recorded over the user’s sensorimotor cortical areas. We train a binary decoder that can distinguish two different mental states and apply it in a cascaded manner to efficiently control the exoskeleton in three different directions: walk forward, turn left, and turn right. This is realized by first classifying the user’s intention to walk forward or to change direction. If the user decides to change direction, a subsequent classification is performed to decide between turning left and turning right. The user’s mental command is executed conditionally, taking into account the possibility of obstacle collision. All five subjects were able to successfully complete the three-way navigation task using brain signals while mounted in the exoskeleton. We observed, on average, a 10.2% decrease in overall task completion time compared to the baseline protocol.
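The cascaded decoding step can be illustrated with a minimal sketch: one binary classifier separates "walk forward" from "change direction", and a second classifier is consulted only when a direction change is decoded. The use of LDA, the feature encoding, and all names below are assumptions made for illustration, not the paper's published implementation.

```python
# Minimal sketch of a cascaded two-stage decoder (illustrative assumptions:
# LDA classifiers, precomputed EEG feature vectors, hypothetical command names).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Stage 1: walk forward vs. change direction; Stage 2: turn left vs. turn right.
stage1 = LinearDiscriminantAnalysis()
stage2 = LinearDiscriminantAnalysis()

def train(features, labels):
    """features: (n_trials, n_features); labels: 'forward', 'left' or 'right'."""
    labels = np.asarray(labels)
    y1 = (labels != 'forward').astype(int)          # 0 = forward, 1 = change direction
    stage1.fit(features, y1)
    turn_idx = y1 == 1
    y2 = (labels[turn_idx] != 'left').astype(int)   # 0 = left, 1 = right
    stage2.fit(features[turn_idx], y2)

def decode(x, obstacle_ahead=False):
    """Decode one EEG feature vector into a command, gated by obstacle information."""
    x = x.reshape(1, -1)
    if stage1.predict(x)[0] == 0:
        cmd = 'walk_forward'
    else:
        cmd = 'turn_left' if stage2.predict(x)[0] == 0 else 'turn_right'
    # Conditional execution: suppress a command that would lead to a collision.
    return 'stop' if (obstacle_ahead and cmd == 'walk_forward') else cmd
```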
Motor-disabled end users have successfully driven a telepresence robot in a complex environment using a Brain-Computer Interface (BCI). However, to facilitate the interaction aspect that underpins the notion of telepresence, users must be able to voluntarily and reliably stop the robot at any moment, not just drive it from point to point. In this work, we propose to exploit the user’s residual muscular activity to provide a fast and reliable control channel that can start or stop the telepresence robot at any moment. Our preliminary results show that this hybrid approach not only increases accuracy, but also reduces workload, and it was the control paradigm preferred by all participants.
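One possible reading of the residual-muscular-activity channel is a simple EMG envelope detector whose threshold crossings toggle the robot between driving and stopped. The sampling rate, smoothing window, and threshold below are hypothetical; the paper's actual detection pipeline is not specified here.

```python
# Hedged sketch of an EMG-based start/stop channel (parameters are assumptions).
import numpy as np

FS = 512            # EMG sampling rate in Hz (assumed)
THRESHOLD = 3.0     # envelope threshold relative to resting baseline (assumed)

def emg_envelope(emg, win_ms=200):
    """Rectify the EMG and smooth it with a moving-average window."""
    win = max(1, int(FS * win_ms / 1000))
    kernel = np.ones(win) / win
    return np.convolve(np.abs(emg), kernel, mode='same')

def start_stop_events(emg, baseline):
    """Return threshold-crossing times (s); each crossing toggles start/stop."""
    env = emg_envelope(emg) / baseline
    active = env > THRESHOLD
    onsets = np.flatnonzero(np.diff(active.astype(int)) == 1)
    return onsets / FS
```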
Independent mobility is core to being able to perform activities of daily living by oneself. However, powered wheelchairs are not an option for a large number of people who are unable to use conventional interfaces due to severe motor disabilities. Non-invasive brain–computer interfaces (BCIs) offer a promising solution to this interaction problem, and in this article we present a shared control architecture that couples the intelligence and desires of the user with the precision of a powered wheelchair. We show how four healthy subjects are able to master control of the wheelchair using an asynchronous motor imagery-based BCI protocol and how this results in higher overall task performance compared with alternative synchronous P300-based approaches.
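The shared control idea of coupling user intent with contextual information about the surroundings can be sketched as blending the decoded steering command with a repulsive term derived from the wheelchair's proximity sensors. The blending rule, gains, and parameter values below are illustrative assumptions rather than the architecture actually used.

```python
# Sketch of shared control: blend the BCI steering intent with obstacle repulsion.
# All gains and distances are assumed values chosen only for illustration.
import numpy as np

def shared_control(bci_turn, obstacle_angles, obstacle_dists,
                   v_max=0.3, repulsion_gain=0.5, safe_dist=1.0):
    """bci_turn: desired angular velocity decoded from the BCI (rad/s).
    obstacle_angles/dists: bearings (rad) and distances (m) from range sensors."""
    repulsion = 0.0
    for ang, d in zip(obstacle_angles, obstacle_dists):
        if d < safe_dist:
            # Push away from close obstacles, more strongly the closer they are.
            repulsion -= repulsion_gain * np.sign(np.sin(ang)) * (safe_dist - d)
    # Slow down when obstacles are near; combine user intent with repulsion.
    v = v_max * min(1.0, min(obstacle_dists, default=safe_dist) / safe_dist)
    w = bci_turn + repulsion
    return v, w  # translational and angular velocity commands
```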
Background: One of the current challenges in brain-machine interfacing is to characterize and decode upper limb kinematics from brain signals, e.g. to control a prosthetic device. Recent research states that it is possible to do so based on low-frequency EEG components. However, the validity of these results is still a matter of discussion. In this paper, we assess the feasibility of decoding upper limb kinematics from EEG signals in center-out reaching tasks during passive and active movements. Methods: The decoding of arm movement was performed using multidimensional linear regression. Passive movements were analyzed using the same methodology to study the influence of proprioceptive sensory feedback on the decoding. Finally, we evaluated the possible advantages of classifying reaching targets instead of decoding continuous trajectories. Results: The results showed that arm movement decoding was significantly above chance level. They also indicated that EEG slow cortical potentials carry significant information for decoding active center-out movements. Classifying the reached targets led to the same conclusions with very high accuracy. Additionally, the low decoding performance obtained from passive movements suggests that the discriminant modulations of low-frequency neural activity are mainly related to the execution of movement, and that proprioceptive feedback alone is not sufficient to decode upper limb kinematics. Conclusions: This paper contributes to assessing the feasibility of using linear regression methods to decode upper limb kinematics from EEG signals. From our findings, we conclude that low-frequency bands carry most of the information used for decoding upper limb kinematics, and that decoding performance for active movements is above chance level and mainly related to the activation of cortical motor areas. We also show that classifying reached targets based on the decoding approach may be a more suitable real-time methodology than directly decoding hand position.
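A minimal sketch of this style of decoder, assuming low-pass-filtered EEG regressed onto hand kinematics with a lagged linear model; the filter cutoff, lag structure, and ridge regularization are illustrative choices, not necessarily those of the paper.

```python
# Sketch of decoding hand kinematics from low-frequency EEG with a lagged
# linear regression (sampling rate, cutoff, lags and regularization are assumed).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import Ridge

FS = 256                      # EEG sampling rate in Hz (assumed)
LAGS = (0, 25, 50, 75, 100)   # temporal lags in samples (assumed)

def lowpass(eeg, cutoff=2.0):
    """Keep the slow cortical potentials (below ~2 Hz) used for decoding."""
    b, a = butter(4, cutoff / (FS / 2), btype='low')
    return filtfilt(b, a, eeg, axis=0)

def lagged_features(eeg):
    """Stack current and past EEG samples as regressors for each time point."""
    max_lag = max(LAGS)
    return np.array([np.concatenate([eeg[t - l] for l in LAGS])
                     for t in range(max_lag, eeg.shape[0])])

def fit_decoder(eeg, kinematics):
    """eeg: (time, channels); kinematics: (time, 3) hand positions or velocities."""
    X = lagged_features(lowpass(eeg))
    y = kinematics[max(LAGS):]
    return Ridge(alpha=1.0).fit(X, y)
```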
Our brain-actuated wheelchair uses shared control to couple the user input with the contextual information about the surroundings in order to perform natural manoeuvres both safely and efficiently. In this study, we investigate the feasibility of using our brain–controlled wheelchair with patients in a rehabilitation clinic. Both user and system performance metrics are analysed. We find that the driving performance of a motor-disabled patient at the clinic is comparable with the performance of four healthy subjects. All five participants were able to complete the driving task successfully.
Objective: A fundamental issue in EEG event-related potential (ERP) studies is the amount of data required to obtain an accurate ERP model. This also impacts the time required to train a classifier for a brain-computer interface (BCI). The issue is mainly due to the poor signal-to-noise ratio and to the large fluctuations of the EEG caused by several sources of variability. One of these sources is directly related to the experimental protocol or application design, and may give rise to amplitude or latency variations. This usually prevents BCI classifiers from generalizing across different experimental protocols. In this work, we analyze the effect of amplitude and latency variations across different experimental protocols based on the same type of ERP. Approach: We present a method to analyze and compensate for latency variations in BCI applications. The algorithm has been tested on two widely used ERPs (the P300 and observation error potentials), in three experimental protocols each. We report the ERP analysis and single-trial classification. Results and significance: The results show (i) that the experimental protocols significantly affect the latency of the recorded potentials but not their amplitudes, and (ii) that latency-corrected data can be used to generalize BCIs, thereby reducing calibration time when facing a new experimental protocol.
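One way to picture the latency compensation is to estimate, for each single trial, the shift that best aligns it with the average ERP template and to re-align the trial before classification. The cross-correlation estimator below is an illustrative assumption, not necessarily the algorithm used in the paper.

```python
# Hedged sketch of single-trial latency correction by template alignment.
import numpy as np

def estimate_latency(trial, template, max_shift=50):
    """Return the shift (in samples) maximizing correlation with the template."""
    shifts = range(-max_shift, max_shift + 1)
    scores = [np.dot(np.roll(trial, s), template) for s in shifts]
    return list(shifts)[int(np.argmax(scores))]

def latency_correct(trials, max_shift=50):
    """trials: (n_trials, n_samples) single-channel ERP epochs.
    Note: np.roll wraps around the epoch edges; acceptable for a sketch."""
    template = trials.mean(axis=0)
    return np.array([np.roll(tr, estimate_latency(tr, template, max_shift))
                     for tr in trials])
```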
Performance variation is one of the main challenges that BCIs are confronted with when used over extended periods of time. Shared control techniques can partially cope with this problem. In this paper, we propose a taxonomy of shared control approaches used for BCIs and review some recent studies in the light of these approaches. We posit that the level of assistance provided to the BCI user should be adjusted in real time in order to enhance BCI reliability over time, an approach that has not been extensively studied in the recent BCI literature. In addition, we investigate the effectiveness of providing online adaptive assistance in a motor-imagery BCI for a tetraplegic end-user with incomplete locked-in syndrome in a longitudinal study lasting 11 months. First, we report a reliable estimation of BCI performance (in terms of command delivery time) using only a 1-s window at the beginning of each trial (AUC 0.8). Second, we demonstrate how adaptive shared control can exploit the output of the performance estimator to adjust the level of assistance online in a BCI game by regulating its speed. In particular, online adaptive assistance was superior to a fixed condition in terms of success rate (p < 0.01). Remarkably, the results exhibited stable performance over several months without recalibration of the BCI classifier or the performance estimator.
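The adaptive-assistance loop can be sketched as a performance estimator trained on features from the first second of each trial, whose output is mapped to a game speed (i.e., a level of assistance). The logistic-regression estimator and the linear mapping below are assumptions chosen only to illustrate the idea.

```python
# Sketch of online adaptive assistance driven by a per-trial performance estimate.
# The estimator type and the speed mapping are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Performance estimator: predicts from 1-s features whether the upcoming command
# will be delivered quickly. It must be fit on labeled calibration trials first.
perf_estimator = LogisticRegression()

def assistance_level(first_second_features, min_speed=0.5, max_speed=1.5):
    """Map the estimated probability of a fast command to a game speed factor."""
    p_fast = perf_estimator.predict_proba(first_second_features.reshape(1, -1))[0, 1]
    # More assistance (slower game) when the estimator predicts a slow command.
    return min_speed + p_fast * (max_speed - min_speed)
```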
The prospect of controlling devices merely by the power of one’s thoughts is compelling, especially for assistive technology applications. In the accompanying video, we show how we have strived to push brain–computer interface (BCI) technology out of the lab and into the real world, while simultaneously moving away from testing solely with healthy subjects to undertaking trials with patients and potential end-users. We describe the evolution of the motor imagery-based BCI, which has resulted in a major milestone: the first patient trial of a motor imagery-based, BCI-controlled wheelchair.
The Sixth International Brain-Computer Interface (BCI) Meeting was held May 30-June 3rd, 2016 at the Asilomar Conference Grounds, Pacific Grove, California, United States. The conference included 28 workshops covering topics in BCI and brain-machine interface research. Topics included BCI for specific populations or applications, advancing BCI research through use of specific signals or technological advances, and translational and commercial issues to bring both implanted and non-invasive BCIs to market. BCI research is growing and expanding in the breadth of its applications, the depth of knowledge it can produce, and the practical benefit it can provide both for those with physical impairments and the general public. Here we provide summaries of each workshop, illustrating the breadth and depth of BCI research and highlighting important issues and calls for action to support future research and development.
Brain–computer interfaces (BCI) (also referred to as brain–machine interfaces; BMI) are, by definition, an interface between the human brain and a technological application. Brain activity for interpretation by the BCI can be acquired with either invasive or non-invasive methods. The key point is that the signals that are interpreted come directly from the brain, bypassing sensorimotor output channels that may or may not have impaired function. This paper provides a concise glimpse of the breadth of BCI research and development topics covered by the workshops of the 6th International Brain–Computer Interface Meeting.