Intelligent Robots for Improving the Quality of Life
The National Centre of Competence in Research (NCCR) Robotics is a Swiss nationwide organisation funded by the Swiss National Science Foundation.
Modern wearable robots are not yet intelligent enough to fully satisfy the demands of end-users, as they lack the sensor fusion algorithms needed to provide optimal assistance and react quickly to perturbations or changes in user intentions. Sensor fusion applications such as intention detection have been emphasized as a major challenge for both robotic orthoses and prostheses. In order to better examine the strengths and shortcomings of the field, this paper presents a review of existing sensor fusion methods for wearable robots, both stationary ones such as rehabilitation exoskeletons and portable ones such as active prostheses and full-body exoskeletons. Fusion methods are first presented as applied to individual sensing modalities (primarily electromyography, electroencephalography and mechanical sensors), and then four approaches to combining multiple modalities are presented. The strengths and weaknesses of the different methods are compared, and recommendations are made for future sensor fusion research.
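One common way to combine multiple sensing modalities is decision-level ("late") fusion, in which each modality produces its own class probabilities and the outputs are merged. The sketch below illustrates this idea with a weighted average; the modality names, intention classes, and weights are illustrative assumptions, not values from the review.

```python
import numpy as np

def late_fusion(posteriors, weights):
    """Combine per-modality class posteriors with a weighted average.

    posteriors: dict modality -> array of class probabilities
    weights:    dict modality -> reliability weight (any positive scale)
    Returns the fused probability vector and the index of the winning class.
    """
    total = sum(weights.values())
    fused = sum((weights[m] / total) * np.asarray(p)
                for m, p in posteriors.items())
    return fused, int(np.argmax(fused))

# Three hypothetical intention classes: [stand, walk, climb stairs].
posteriors = {
    "emg":        [0.5, 0.4, 0.1],
    "eeg":        [0.3, 0.4, 0.3],
    "mechanical": [0.2, 0.7, 0.1],
}
# Hypothetical reliability weights per modality.
weights = {"emg": 0.5, "eeg": 0.2, "mechanical": 0.3}

fused, intent = late_fusion(posteriors, weights)  # fused sums to 1
```

In a real wearable robot, the weights would typically be learned from validation data or adapted online to each modality's current reliability.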
Objective. Brain-machine interfaces (BMIs) have been proposed in closed-loop applications for neuromodulation and neurorehabilitation. This study describes the impact of different feedback modalities on the performance of an EEG-based BMI that decodes motor imagery (MI) of leg flexion and extension. Approach. We executed experiments in a lower-limb gait trainer (the legoPress) where nine able-bodied subjects participated in three consecutive sessions based on a crossover design. A random forest classifier was trained on the offline session and tested online in two sessions with visual and proprioceptive feedback, respectively. Post-hoc classification was conducted to assess the impact of feedback modalities and learning effect (an improvement over time) on the simulated trial-based performance. Finally, we performed feature analysis to investigate the discriminant power and brain pattern modulations across the subjects. Main Results. (i) For real-time classification, the average accuracy was 62.33 ± 4.95% and 63.89 ± 6.41% for the two online sessions. The results were significantly higher than chance level, demonstrating the feasibility of distinguishing between MI of leg extension and flexion. (ii) For post-hoc classification, the performance with proprioceptive feedback (69.45 ± 9.95%) was significantly better than with visual feedback (62.89 ± 9.20%), while there was no significant learning effect. (iii) We reported individual discriminant features and brain patterns associated with each feedback modality, which differed between the two modalities, although no general conclusion could be drawn. Significance. This study presented a closed-loop brain-controlled gait trainer as a proof of concept for neurorehabilitation devices. We demonstrated the feasibility of decoding lower-limb movement in an intuitive and natural way. As far as we know, this is the first online study discussing the role of feedback modalities in lower-limb MI decoding.
Our results suggest that proprioceptive feedback has an advantage over visual feedback, which could be used to improve robot-assisted strategies for motor training and functional recovery.
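The offline-train / online-test protocol described above can be sketched on synthetic data. The feature layout (band-power features over a handful of EEG channels), class separation, and trial counts below are assumptions for illustration, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def synth_trials(n, label):
    # Two MI classes (0 = leg extension, 1 = leg flexion) separated by a
    # shift in band power over 8 hypothetical EEG channels.
    return rng.normal(loc=label * 0.8, scale=1.0, size=(n, 8))

# "Offline" calibration session used to train the decoder.
X_train = np.vstack([synth_trials(60, 0), synth_trials(60, 1)])
y_train = np.repeat([0, 1], 60)
# Later "online" session, simulated here as held-out trials.
X_test = np.vstack([synth_trials(20, 0), synth_trials(20, 1)])
y_test = np.repeat([0, 1], 20)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)          # train on the offline session
acc = clf.score(X_test, y_test)    # accuracy on the simulated online session
```

With real EEG, the features would be extracted from band-passed signals trial by trial, and online accuracy is typically much closer to chance than on synthetic data like this.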
Background: One of the current challenges in brain-machine interfacing is to characterize and decode upper limb kinematics from brain signals, e.g. to control a prosthetic device. Recent research work states that it is possible to do so based on low-frequency EEG components. However, the validity of these results is still a matter of discussion. In this paper, we assess the feasibility of decoding upper limb kinematics from EEG signals in center-out reaching tasks during passive and active movements. Methods: The decoding of arm movement was performed using a multidimensional linear regression. Passive movements were analyzed using the same methodology to study the influence of proprioceptive sensory feedback on the decoding. Finally, we evaluated the possible advantages of classifying reaching targets, instead of continuous trajectories. Results: The results showed that arm movement decoding was significantly above chance levels. The results also indicated that EEG slow cortical potentials carry significant information to decode active center-out movements. The classification of reached targets led to the same conclusions with very high accuracy. Additionally, the low decoding performance obtained from passive movements suggests that discriminant modulations of low-frequency neural activity are mainly related to the execution of movement, while proprioceptive feedback is not sufficient to decode upper limb kinematics. Conclusions: This paper contributes to assessing the feasibility of using linear regression methods to decode upper limb kinematics from EEG signals. From our findings, it can be concluded that low-frequency bands carry most of the information used in decoding upper limb kinematics, and that decoding performance for active movements is above chance levels and mainly related to the activation of cortical motor areas.
We also show that the classification of reached targets from decoding approaches may be a more suitable real-time methodology than a direct decoding of hand position.
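Multidimensional linear regression of the kind used above maps a matrix of EEG features to continuous kinematic variables by ordinary least squares. The sketch below shows this on synthetic signals; the channel count, noise level, and train/test split are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_samples, n_channels = 500, 16
eeg = rng.normal(size=(n_samples, n_channels))   # stand-in for band-passed EEG
true_w = rng.normal(size=(n_channels, 2))        # hidden linear mapping
# Synthetic 2-D kinematics (e.g. x/y hand velocity) = linear mix + noise.
kinematics = eeg @ true_w + 0.1 * rng.normal(size=(n_samples, 2))

# Fit W minimising ||eeg @ W - kinematics||^2 (ordinary least squares).
split = 400
W, *_ = np.linalg.lstsq(eeg[:split], kinematics[:split], rcond=None)
pred = eeg[split:] @ W

# Pearson correlation per kinematic dimension, a common decoding metric.
r = [np.corrcoef(pred[:, d], kinematics[split:, d])[0, 1] for d in range(2)]
```

On real EEG the regressors are usually time-lagged low-frequency samples from many channels, and reported correlations are far lower than on this idealized synthetic example.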
Human motion recognition is essential for many biomedical applications, but few studies compare the abilities of multiple sensing modalities. This paper thus evaluates the effectiveness of different modalities when predicting targets of human reaching movements. Electroencephalography, electrooculography, camera-based eye tracking, electromyography, hand tracking and the user’s preferences are used to make predictions at different points in time. Prediction accuracies are calculated based on data from 10 subjects in within-subject cross-validation. Results show that electroencephalography can make predictions before limb motion onset, but its accuracy decreases as the number of potential targets increases. Electromyography and hand tracking give high accuracy, but only after motion onset. Eye tracking is robust and gives high accuracy at limb motion onset. Combining multiple modalities can increase accuracy, though not always. While many studies have evaluated individual sensing modalities, this study provides quantitative data on many modalities at different points in time in a single setting. The information could help biomedical engineers choose the most appropriate equipment for a particular application.
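The within-subject cross-validation used to compare modalities can be sketched as follows. The synthetic data, modality names, class separations, and the nearest-centroid classifier are all illustrative assumptions, chosen only to show the evaluation protocol.

```python
import numpy as np

rng = np.random.default_rng(2)

def accuracy_cv(X, y, k=5):
    """k-fold cross-validated accuracy of a nearest-centroid classifier."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    classes = np.unique(y)
    correct = 0
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        # One centroid per class, estimated from the training folds only.
        centroids = np.array([X[train][y[train] == c].mean(axis=0)
                              for c in classes])
        # Distance of each test trial to each centroid; predict the nearest.
        d = np.linalg.norm(X[fold][:, None] - centroids[None], axis=2)
        correct += int((classes[d.argmin(axis=1)] == y[fold]).sum())
    return correct / len(y)

# Two reach targets; "gaze" features separate them far better than "EEG"
# here, mimicking the kind of modality comparison described above.
y = np.repeat([0, 1], 50)
eeg  = rng.normal(loc=y[:, None] * 0.3, size=(100, 4))   # weakly separable
gaze = rng.normal(loc=y[:, None] * 2.0, size=(100, 2))   # strongly separable

acc_eeg, acc_gaze = accuracy_cv(eeg, y), accuracy_cv(gaze, y)
```

Because folds never share trials between training and testing, this estimates how well each modality would generalize to new trials from the same subject.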
Brain-machine interfaces (BMIs) enable humans to interact with devices by modulating their brain signals. Despite impressive technological advancements, several obstacles remain. The most commonly used BMI control signals are derived from the brain areas involved in primary sensory- or motor-related processing. However, these signals only reflect a limited range of human intentions. Therefore, additional sources of brain activity for controlling BMIs need to be explored. In particular, higher-order cognitive brain signals, specifically those encoding goal-directed intentions, are natural candidates for enlarging the repertoire of BMI control signals and making them more efficient and intuitive. Thus, this paper identifies the prefrontal brain area as a key target region for future BMIs, given its involvement in higher-order, goal-oriented cognitive processes.