Intelligent Robots for Improving the Quality of Life
The National Centre of Competence in Research (NCCR) Robotics is a Swiss nationwide organisation funded by the Swiss National Science Foundation.
This paper describes a brain-machine interface for online control of a powered lower-limb exoskeleton based on electroencephalogram (EEG) signals recorded over the user’s sensorimotor cortical areas. We train a binary decoder that can distinguish two different mental states and apply it in a cascaded manner to efficiently control the exoskeleton in three different directions: walk front, turn left and turn right. This is realized by first classifying the user’s intention to walk front or to change direction. If the user decides to change direction, a subsequent classification is performed to decide between turning left or right. The user’s mental command is executed conditionally, taking into account the possibility of obstacle collision. All five subjects were able to successfully complete the 3-way navigation task using brain signals while mounted in the exoskeleton. We observed an average 10.2% decrease in overall task completion time compared to the baseline protocol.
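The cascaded control scheme described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `decode_binary` stands in for the EEG decoder, `obstacle_ahead` for the collision check, and the `"stop"` fallback is an assumed behaviour for vetoed commands.

```python
def cascaded_command(decode_binary, obstacle_ahead):
    """Map two cascaded binary decodings onto three navigation commands."""
    # Stage 1: walk forward vs. change direction
    if decode_binary("walk_or_turn") == "walk":
        command = "walk_front"
    else:
        # Stage 2: reached only when the user wants to change direction
        command = "turn_left" if decode_binary("left_or_right") == "left" else "turn_right"
    # Conditional execution: veto commands that would cause a collision
    return command if not obstacle_ahead(command) else "stop"
```

The point of the cascade is that a single, well-trained binary decoder covers a three-way choice: the second classification is only invoked when the first one indicates a direction change.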
Error-related EEG potentials (ErrP) can be used for brain-machine interfacing (BMI). Decoding of these signals, which indicate the subject’s perception of erroneous system decisions or actions, can be used to correct those actions or to improve the overall interfacing system. Multiple studies have shown the feasibility of decoding these potentials on a single-trial basis using different types of experimental protocols and feedback modalities. However, previously reported approaches are limited by the use of long inter-stimulus intervals (ISI > 2 s). In this work we assess whether this limitation can be overcome. Our results show that it is possible to decode error-related potentials elicited by stimuli presented with ISIs below 1 s without a decrease in performance. Furthermore, the increase in presentation rate did not increase the subjects’ workload. This suggests that the presentation rate for ErrP-based BMI protocols using serial monitoring paradigms can be substantially increased with respect to previous works.
Search and rescue, autonomous construction, and many other semi-autonomous multi-robot applications can benefit from proximal interactions between an operator and a swarm of robots. Most research on proximal interaction is based on explicit communication techniques such as gesture and speech. This study proposes a new implicit proximal communication technique to approach the problem of robot selection. We use electroencephalography (EEG) signals to select the robot at which the operator is looking. This is achieved using the steady-state visually evoked potential (SSVEP), a repeatable neural response to a regularly blinking visual stimulus that varies predictably with the blinking frequency. In our experiments, each robot was equipped with LEDs blinking at a different frequency, and the operator’s SSVEP neural response was extracted from the EEG signal to detect and select the robot without requiring any conscious action by the user. This study systematically investigates several parameters affecting the SSVEP neural response: blinking frequency of the LED, distance between the robot and the operator, and color of the LED. Based on these parameters, we study two signal processing approaches and critically analyze their performance on 10 subjects controlling a set of physical robots. Our results show that despite numerous artifacts, it is possible to achieve a recognition rate higher than 85% on some subjects, while the average over the ten subjects was 75%.
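Because each robot's LEDs blink at a distinct frequency, selection reduces to finding which candidate frequency dominates the EEG spectrum. A minimal sketch of this idea is shown below; it simply compares FFT power at each candidate frequency, whereas practical SSVEP pipelines (including, presumably, the two approaches studied in the paper) typically use more robust methods such as canonical correlation analysis over harmonics.

```python
import numpy as np

def detect_ssvep(eeg, fs, blink_freqs):
    """Return the candidate blink frequency with the most power in `eeg`.

    `eeg` is a 1-D signal from an occipital channel, `fs` the sampling
    rate in Hz, `blink_freqs` the LED frequencies assigned to the robots.
    """
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Power at the FFT bin closest to each candidate LED frequency
    power = [spectrum[np.argmin(np.abs(freqs - f))] for f in blink_freqs]
    return blink_freqs[int(np.argmax(power))]
```

On a synthetic signal containing a 12 Hz oscillation plus noise, this picks 12 Hz out of the candidate set, i.e., the robot blinking at that rate would be selected.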
The ability to recognize errors is crucial for efficient behavior. Numerous studies have identified electrophysiological correlates of error recognition in the human brain (error-related potentials, ErrPs). Consequently, it has been proposed to use these signals to improve human-computer interaction (HCI) or brain-machine interfacing (BMI). Here, we present a review of over a decade of developments towards this goal. This body of work provides consistent evidence that ErrPs can be successfully detected on a single-trial basis, and that they can be effectively used in both HCI and BMI applications. We first describe the ErrP phenomenon and follow up with an analysis of different strategies to increase the robustness of a system by incorporating single-trial ErrP recognition, either by correcting the machine’s actions or by providing means for its error-based adaptation. These approaches can be applied both when the user employs traditional HCI input devices and in combination with another BMI channel. Finally, we discuss the current challenges that have to be overcome in order to fully integrate ErrPs into practical applications. This includes, in particular, the characterization of such signals during real(istic) applications, as well as the possibility of extracting richer information from them, going beyond the time-locked decoding that dominates current approaches.
Research in brain-computer interfaces has achieved impressive progress towards implementing assistive technologies for restoration or substitution of lost motor capabilities, as well as supporting technologies for able-bodied subjects. Notwithstanding this progress, effective translation of these interfaces from proof-of-concept prototypes into reliable applications remains elusive. As a matter of fact, most current BCI systems cannot be used independently for long periods of time by their intended end-users. Multiple factors that impair achieving this goal have already been identified. However, it is not clear how they affect overall BCI performance or how they should be tackled. This is worsened by publication bias, whereby only positive results are disseminated, preventing the research community from learning from its errors. This paper is the result of a workshop held at the 6th International BCI Meeting in Asilomar. We summarize here the discussion on concrete research avenues and guidelines that may help overcome common pitfalls and make BCIs a useful alternative communication device.
One of the challenges in using brain-computer interfaces over extended periods of time is the uncertainty in the system. This uncertainty can be due to the user’s internal states, the non-stationarity of the brain signals, or the variation of the class-discriminative information over time. As a consequence, users are often unable to maintain the same accuracy and time efficiency in delivering BCI commands. In this paper, we tackle the issue of variation in BCI command delivery time for a motor imagery task, with the aim of providing assistance through adaptive shared control. This is important mainly because long delivery times for mental commands lead to uncertainty in the classification of the user’s intent and limit the responsiveness of the system. To address this issue, we separate the trials into “long” and “short” groups of equal size. We demonstrate that, using only a few samples at the beginning of the trial, we are able to predict whether the current trial will be short or long with high accuracy (70%–86%). This prediction in turn enables us to tune the shared control parameters to overcome the issue of uncertainty.
Providing adaptive shared control for Brain-Computer Interfaces (BCIs) can result in better performance while reducing the user’s mental workload. In this respect, online estimation of the accuracy and speed of command delivery are important factors. This study aims at real-time differentiation between fast and slow trials in a motor imagery BCI. In our experiments, we refer to trials shorter than the median trial length as “fast” trials and to those longer than the median as “slow” trials. We propose a classifier for real-time distinction between fast and slow trials based on estimates of the entropy rates for the first 2-3 s of the electroencephalogram (EEG). Results suggest that whether a trial is slow or fast can be predicted well before a cutoff time. This is important for adaptive shared control, especially because 55% to 75% of trials (for the five subjects in this study) are longer than that cutoff time.
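The feature underlying this prediction is a regularity measure computed from the first seconds of EEG. As an illustrative stand-in for the entropy-rate estimates used in the paper, the sketch below computes the spectral entropy of a short window: a flat (noise-like) spectrum yields high entropy, a concentrated (rhythmic) spectrum low entropy.

```python
import numpy as np

def spectral_entropy(window):
    """Shannon entropy (in bits) of the normalised power spectrum of a window.

    A simple regularity measure related to, but not the same as, the
    entropy-rate estimates in the study: broadband activity scores high,
    a dominant oscillation scores low.
    """
    psd = np.abs(np.fft.rfft(window)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())
```

In an online setting, such a feature would be computed over the first 2-3 s of each trial and fed to a binary fast-vs-slow classifier.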
Early detection of movement intention could minimize delays in the activation of neuroprosthetic devices. As yet, single-trial analysis using non-invasive approaches for understanding such movement preparation remains a challenging task. We studied the feasibility of predicting movement directions in self-paced upper-limb center-out reaching tasks, i.e., spontaneous movements executed without an external cue that can better reflect natural motor behavior in humans. We report results of non-invasive electroencephalography (EEG) recorded from mild stroke patients and able-bodied participants. Previous studies have shown that low-frequency EEG oscillations are modulated by the intent to move and can therefore be decoded prior to movement execution. Motivated by these results, we investigated whether slow cortical potentials (SCPs) preceding movement onset can be used to classify reaching directions, and evaluated the performance using 5-fold cross-validation. For able-bodied subjects, we obtained an average decoding accuracy of 76% (chance level of 25%) at 62.5 ms before onset using the amplitude of ongoing SCPs, with above-chance performance between 875 ms and 437.5 ms prior to onset. The decoding accuracy for the stroke patients was on average 47% with their paretic arms. Comparison of the decoding accuracy across different frequency ranges (i.e., SCPs, delta, theta, alpha and gamma) yielded the best accuracy using SCPs filtered between 0.1 and 1 Hz. Across all subjects, including stroke patients, the best selected features were obtained mostly from the fronto-parietal regions, consistent with previous neurophysiological studies on arm reaching tasks. In summary, we conclude that SCPs allow single-trial decoding of reaching directions at least 312.5 ms before reach onset.
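The evaluation protocol mentioned here, 5-fold cross-validated classification of a four-class direction label against a 25% chance level, can be sketched generically. The nearest-centroid decoder and the feature array below are illustrative assumptions (the abstract does not specify the classifier); in the actual study the features would be pre-onset SCP amplitudes.

```python
import numpy as np

def five_fold_accuracy(features, labels, seed=0):
    """5-fold cross-validated accuracy of a nearest-centroid decoder.

    `features` has shape (n_trials, n_features); `labels` holds one class
    label per trial (e.g., one of four reaching directions).
    """
    rng = np.random.default_rng(seed)
    n = len(labels)
    folds = np.array_split(rng.permutation(n), 5)  # shuffled 5-fold split
    correct = 0
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        classes = np.unique(labels[train])
        # One mean feature vector (centroid) per class, from training trials
        centroids = np.stack([features[train][labels[train] == c].mean(axis=0)
                              for c in classes])
        for i in test:
            d = np.linalg.norm(centroids - features[i], axis=1)
            correct += int(classes[np.argmin(d)] == labels[i])
    return correct / n
```

With four classes, any accuracy reliably above 0.25 indicates that the pre-onset features carry direction information.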
Error-related potentials (ErrP) have been increasingly studied in psychophysical experiments as well as for brain-machine interfacing. In the latter case, the generalisation capabilities of ErrP decoders are a crucial element for avoiding frequent recalibration, thus increasing their usability. Previous studies have suggested that ErrP signals are rather stable across recording sessions. Moreover, studies using protocols of serial stimulus presentation show that these potentials do not change significantly with the presentation rate. Here we complement these studies by analysing the generalisation capabilities of the decoders themselves. Using data from monitoring experiments, we evaluate how much performance degrades when a decoder is tested in a condition different from the one it was trained on. Furthermore, we compare different spatial filtering techniques to determine which preprocessing steps yield ErrP features that are least sensitive to such changes.
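The abstract does not name the spatial filters compared, but a standard baseline in EEG preprocessing is the common average reference (CAR), which removes activity shared across all channels. A minimal sketch, offered only as an example of the kind of spatial filtering step being compared:

```python
import numpy as np

def common_average_reference(eeg):
    """CAR spatial filter: subtract the instantaneous mean across channels.

    `eeg` has shape (n_channels, n_samples); the returned array has the
    same shape, with zero mean across channels at every sample.
    """
    return eeg - eeg.mean(axis=0, keepdims=True)
```

Filters like CAR attenuate global artifacts and reference drift, which is one reason spatial filtering can make single-trial ErrP features more stable across conditions.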