NCCR Robotics

Intelligent Robots for Improving the Quality of Life

The National Centre of Competence in Research (NCCR) Robotics is a Swiss nationwide organisation funded by the Swiss National Science Foundation…

Looking for publications? You might want to consider searching on the EPFL Infoscience site which provides advanced publication search capabilities.

A survey of sensor fusion methods in wearable robotics

  • Authors: Novak, Domen; Riener, Robert

Modern wearable robots are not yet intelligent enough to fully satisfy the demands of end-users, as they lack the sensor fusion algorithms needed to provide optimal assistance and react quickly to perturbations or changes in user intentions. Sensor fusion applications such as intention detection have been emphasized as a major challenge for both robotic orthoses and prostheses. In order to better examine the strengths and shortcomings of the field, this paper presents a review of existing sensor fusion methods for wearable robots, both stationary ones such as rehabilitation exoskeletons and portable ones such as active prostheses and full-body exoskeletons. Fusion methods are first presented as applied to individual sensing modalities (primarily electromyography, electroencephalography and mechanical sensors), and then four approaches to combining multiple modalities are presented. The strengths and weaknesses of the different methods are compared, and recommendations are made for future sensor fusion research.
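One common way to combine multiple sensing modalities, as surveyed in papers like this one, is late (decision-level) fusion: each modality's classifier produces class probabilities, and these are combined by a weighted average. The sketch below is illustrative only; the modality names, class counts, and weights are assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical per-modality class-probability estimates for one time step
# (3 candidate user intentions). Numbers are illustrative.
p_emg = np.array([0.6, 0.3, 0.1])   # electromyography classifier output
p_eeg = np.array([0.4, 0.4, 0.2])   # electroencephalography classifier output
p_mech = np.array([0.7, 0.2, 0.1])  # mechanical-sensor classifier output

# Per-modality confidence weights (assumed here; in practice these could be
# tuned on held-out data).
weights = np.array([0.5, 0.2, 0.3])

# Late fusion: weighted average of the per-modality probability vectors.
stacked = np.vstack([p_emg, p_eeg, p_mech])
p_fused = weights @ stacked
p_fused /= p_fused.sum()  # renormalise to a probability distribution

predicted_class = int(np.argmax(p_fused))
```

A feature-level fusion alternative would instead concatenate raw features from all modalities into one vector before classification; the decision-level form shown here has the practical advantage that a modality can be dropped at runtime by zeroing its weight.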

Posted on: October 22, 2014

Effectiveness of different sensing modalities in predicting targets of reaching movements

  • Authors: Novak, Domen; Omlin, Ximena; Lein-Hess, Rebecca; Riener, Robert

Human motion recognition is essential for many biomedical applications, but few studies compare the abilities of multiple sensing modalities. This paper thus evaluates the effectiveness of different modalities when predicting targets of human reaching movements. Electroencephalography, electrooculography, camera-based eye tracking, electromyography, hand tracking and the user’s preferences are used to make predictions at different points in time. Prediction accuracies are calculated based on data from 10 subjects in within-subject cross-validation. Results show that electroencephalography can make predictions before limb motion onset, but its accuracy decreases as the number of potential targets increases. Electromyography and hand tracking give high accuracy, but only after motion onset. Eye tracking is robust and gives high accuracy at limb motion onset. Combining multiple modalities can increase accuracy, though not always. While many studies have evaluated individual sensing modalities, this study provides quantitative data on many modalities at different points of time in a single setting. The information could help biomedical engineers choose the most appropriate equipment for a particular application.
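In a within-subject cross-validation scheme like the one used here, each subject's trials are split into folds, and accuracy is measured on held-out trials of the same subject. The sketch below shows the general idea on synthetic data with a simple nearest-centroid classifier; the data, feature dimensions, and classifier are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-subject data: 2 reaching targets, 10 trials each,
# 3 features per trial (e.g. hand-tracking coordinates). Illustrative only.
n_per_class, n_features = 10, 3
X = np.vstack([
    rng.normal(0.0, 1.0, (n_per_class, n_features)),  # trials toward target 0
    rng.normal(3.0, 1.0, (n_per_class, n_features)),  # trials toward target 1
])
y = np.array([0] * n_per_class + [1] * n_per_class)

def within_subject_cv(X, y, k=5):
    """k-fold cross-validation over one subject's trials,
    using a nearest-centroid classifier."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    correct = 0
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        # One centroid per target class, estimated from the training folds.
        centroids = np.array([X[train][y[train] == c].mean(axis=0)
                              for c in np.unique(y[train])])
        for i in fold:
            pred = np.argmin(np.linalg.norm(centroids - X[i], axis=1))
            correct += (pred == y[i])
    return correct / len(y)

accuracy = within_subject_cv(X, y)
```

Within-subject evaluation estimates how well a system calibrated on a user's own data will perform for that user; a between-subject split would instead test generalisation to unseen users, which is typically harder for EMG- and EEG-based predictors.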

Posted on: October 3, 2013