
Latency Correction of Event-Related Potentials Between Different Experimental Protocols

  • Authors: Iturrate, Iñaki; Chavarriaga, Ricardo; Montesano, Luis; Minguez, Javier; Millán, José del R.

Objective: A fundamental issue in EEG event-related potential (ERP) studies is the amount of data required to build an accurate ERP model. This also affects the time required to train a classifier for a brain-computer interface (BCI). The issue is mainly due to the poor signal-to-noise ratio and to the large fluctuations of the EEG caused by several sources of variability. One of these sources is directly related to the experimental protocol or application designed, and may cause amplitude or latency variations. This usually prevents BCI classifiers from generalizing across different experimental protocols. In this work, we analyze the effect of amplitude and latency variations across different experimental protocols based on the same type of ERP. Approach: We present a method to analyze and compensate for latency variations in BCI applications. The algorithm has been tested on two widely used ERPs (P300 and observation error potentials), with three experimental protocols in each case. We report the ERP analysis and single-trial classification. Results and significance: The results show (i) that the experimental protocols significantly affect the latency of the recorded potentials but not their amplitudes, and (ii) that latency-corrected data can be used to generalize BCIs, thereby reducing the calibration time when facing a new experimental protocol.

Posted on: February 23, 2014
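As a rough illustration of the latency-correction idea (a sketch, not the authors' algorithm), one could estimate a per-trial latency by cross-correlating each trial with the grand-average template and then shift the trials so their peaks align. The function names `estimate_latencies` and `latency_correct` and the cross-correlation approach are illustrative assumptions:

```python
import numpy as np

def estimate_latencies(trials, fs, max_shift_s=0.1):
    """Estimate a per-trial latency shift (in samples) by cross-correlating
    each trial with the grand-average template, within +/- max_shift_s."""
    template = trials.mean(axis=0)
    max_shift = int(max_shift_s * fs)
    shifts = []
    for trial in trials:
        xc = np.correlate(trial, template, mode="full")
        center = len(trial) - 1              # zero-lag index of the full output
        window = xc[center - max_shift:center + max_shift + 1]
        shifts.append(int(np.argmax(window)) - max_shift)
    return np.array(shifts)

def latency_correct(trials, shifts):
    """Shift each trial back by its estimated latency so the ERPs align."""
    corrected = np.empty_like(trials)
    for i, (trial, s) in enumerate(zip(trials, shifts)):
        corrected[i] = np.roll(trial, -s)
    return corrected
```

After correction, trial-averaged ERPs sharpen and a classifier trained on one protocol can be applied to latency-corrected trials from another, which is the transfer effect the abstract describes.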

Transfer in Inverse Reinforcement Learning for Multiple Strategies

  • Authors: Tanwani, Ajay Kumar; Billard, Aude

We consider the problem of incrementally learning different strategies for performing a complex sequential task from multiple demonstrations by an expert or a set of experts. While the task is the same, each expert differs in his/her way of performing it. We assume that this variety across experts' demonstrations arises because each expert/strategy is driven by a different reward function, where each reward function is expressed as a linear combination of a set of known features. Consequently, we can learn all the expert strategies by forming a convex set of optimal deterministic policies, from which one can match any unseen expert strategy drawn from this set. Instead of learning every optimal policy in this set from scratch, the learner transfers knowledge from the set of learned policies to bootstrap its search for a new optimal policy. We demonstrate our approach on a simulated mini-golf task in which a 7-degree-of-freedom Barrett WAM robot arm learns to sequentially putt on different holes in accordance with the playing strategies of the expert.

Posted on: July 23, 2013
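The matching step above can be sketched in a toy form (this is an illustrative assumption, not the paper's algorithm): given the feature expectations of previously learned policies, find convex-combination weights whose mixture best matches a new expert's observed feature expectations, here via Frank-Wolfe over the probability simplex. The name `match_strategy` is hypothetical:

```python
import numpy as np

def match_strategy(feature_exps, target_mu, iters=200):
    """Find convex weights over learned policies whose mixed feature
    expectations best match a new expert's observed feature expectations.
    Uses Frank-Wolfe, which keeps the iterate on the simplex by construction."""
    F = np.asarray(feature_exps, dtype=float).T   # columns: one policy each
    K = F.shape[1]
    w = np.full(K, 1.0 / K)                       # start from the uniform mix
    for t in range(iters):
        grad = 2.0 * F.T @ (F @ w - target_mu)    # gradient of ||F w - mu||^2
        s = np.zeros(K)
        s[np.argmin(grad)] = 1.0                  # best simplex vertex for the LMO
        gamma = 2.0 / (t + 2.0)                   # standard diminishing step size
        w = (1.0 - gamma) * w + gamma * s
    return w
```

Because the reward is linear in the features, a policy mixture that matches the expert's feature expectations achieves the same expected reward under the (unknown) true reward, which is what makes the convex-set construction in the abstract sufficient for matching unseen strategies.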