Human motion recognition is essential for many biomedical applications, but few studies compare the abilities of multiple sensing modalities. This paper thus evaluates the effectiveness of different modalities when predicting targets of human reaching movements. Electroencephalography, electrooculography, camera-based eye tracking, electromyography, hand tracking and the user's preferences are used to make predictions at different points in time. Prediction accuracies are calculated based on data from 10 subjects in within-subject cross-validation. Results show that electroencephalography can make predictions before limb motion onset, but its accuracy decreases as the number of potential targets increases. Electromyography and hand tracking give high accuracy, but only after motion onset. Eye tracking is robust and gives high accuracy at limb motion onset. Combining multiple modalities can increase accuracy, though not always. While many studies have evaluated individual sensing modalities, this study provides quantitative data on many modalities at different points in time in a single setting. The information could help biomedical engineers choose the most appropriate equipment for a particular application.
Several design strategies for rehabilitation robotics have aimed to improve patients' experiences using motivating and engaging virtual environments. This paper presents a new design strategy: enhancing patient freedom with a complex virtual environment that intelligently detects patients' intentions and supports the intended actions. A 'virtual kitchen' scenario has been developed in which many possible actions can be performed at any time, allowing patients to experiment and giving them more freedom. Remote eye tracking is used to detect the intended action and trigger appropriate support by a rehabilitation robot. This approach requires no additional equipment attached to the patient and has a calibration time of less than a minute. The system was tested on healthy subjects using the ARMin III arm rehabilitation robot. It was found to be technically feasible and usable by healthy subjects. However, the intention detection algorithm should be improved using better sensor fusion, and clinical tests with patients are needed to evaluate the system's usability and potential therapeutic benefits.