We aim at developing autonomous miniature hovering flying robots capable of navigating in unstructured GPS-denied environments. A major challenge is the miniaturization of the embedded sensors and processors that allow such platforms to fly by themselves. In this paper, we propose a novel ego-motion estimation algorithm for hovering robots equipped with inertial and optic-flow sensors that runs in real time on a microcontroller and enables autonomous flight. Unlike many vision-based methods, this algorithm does not rely on feature tracking, structure estimation, additional distance sensors or assumptions about the environment. In this method, we introduce the translational optic-flow direction constraint, which uses the optic-flow direction but not its scale to correct for inertial sensor drift during changes of direction. This solution requires considerably simpler electronics and sensors and works in environments of any geometry. Here we describe the implementation and performance of the method on a hovering robot equipped with eight 0.65 g optic-flow sensors, and show that it can be used for closed-loop control of various motions.
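For reference, the constraint builds on the standard spherical model of translational optic flow; the notation below is ours and is added only for illustration. For a sensor with unit viewing direction d_i observing the scene at distance D_i, the translational optic flow p_i and its direction are

    p_i = -\frac{1}{D_i}\,\bigl( v - (v \cdot d_i)\, d_i \bigr),
    \qquad
    \hat{p}_i = \frac{p_i}{\lVert p_i \rVert}
              = -\frac{v - (v \cdot d_i)\, d_i}{\lVert v - (v \cdot d_i)\, d_i \rVert}.

Since the unknown distance D_i only scales p_i, the direction \hat{p}_i constrains the velocity v independently of the scene geometry, which is what permits drift correction without distance sensors or structure estimation.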
A new method is presented for estimating the ego-motion (the direction and amplitude of the velocity) of a mobile device comprising optic-flow and inertial sensors (hereinafter the apparatus). The velocity is expressed in the apparatus's reference frame, which moves with the apparatus. The method relies on short-term inertial navigation and on the direction of the translational optic flow to estimate ego-motion, i.e., a velocity estimate describing both the speed amplitude and the direction of motion. A key characteristic of the invention is the use of optic flow without the need for any kind of feature tracking. Moreover, the algorithm uses only the direction of the optic flow, not its amplitude, because the scale of the velocity is recovered through inertial navigation and changes in the direction of the apparatus.
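As a rough illustration of how a direction-only optic-flow measurement can correct a drifting inertial velocity estimate, here is a minimal sketch in Python/NumPy. All names, the sensor geometry conventions, and the simple proportional correction are our assumptions for illustration; the actual invention would use a proper estimator (e.g., a Kalman-type filter) rather than this heuristic update.

    import numpy as np

    def tofdc_correct(v_pred, viewing_dirs, flow_dirs, gain=0.2):
        """Correct an inertially predicted body-frame velocity using only
        the direction of the measured translational optic flow.

        v_pred       : (3,) velocity predicted by short-term inertial navigation
        viewing_dirs : list of (3,) unit viewing directions d_i of the sensors
        flow_dirs    : list of (3,) unit vectors u_i giving the measured
                       translational optic-flow direction (each u_i lies in
                       the plane perpendicular to its d_i)
        """
        v = np.array(v_pred, dtype=float)
        for d, u in zip(viewing_dirs, flow_dirs):
            # Predicted translational flow, up to the unknown 1/D_i scale
            # (the distance drops out of the direction).
            p = -(v - np.dot(v, d) * d)
            n = np.linalg.norm(p)
            if n < 1e-9:
                continue  # direction undefined when v is (anti)parallel to d
            p_hat = p / n
            # Direction residual, projected onto the components that actually
            # rotate the predicted flow direction (tangent to both unit
            # constraints; d and p_hat are mutually perpendicular).
            r = u - p_hat
            r -= np.dot(r, p_hat) * p_hat
            r -= np.dot(r, d) * d
            # Gradient-style nudge: rotate the velocity estimate so the
            # predicted flow direction moves toward the measured one.
            v -= gain * r
        return v

Note that each such measurement constrains only a direction, so the speed scale stays unobservable until the inertial prediction and a change of motion direction pin it down, consistent with the claim above.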
We aim at developing autonomous miniature hovering flying robots capable of navigating in unstructured GPS-denied environments. A major challenge is the miniaturization of the embedded sensors and processors allowing such platforms to fly autonomously. In this paper, we propose a novel ego-motion estimation algorithm for hovering robots equipped with inertial and optic-flow sensors that runs in real time on a microcontroller. Unlike many vision-based methods, this algorithm does not rely on feature tracking, structure estimation, additional distance sensors or assumptions about the environment. Key to this method is the introduction of the translational optic-flow direction constraint (TOFDC), which does not use the optic-flow scale, but only its direction, to correct for inertial sensor drift during changes of direction. This solution requires considerably simpler electronics and sensors and works in environments of any geometry. We demonstrate the implementation of this algorithm on a miniature 46 g quadrotor for closed-loop position control.
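For completeness, the short-term inertial navigation that the TOFDC corrects can be sketched as a textbook strapdown propagation of the body-frame velocity. This is a minimal Euler step in our notation, not the paper's implementation:

    import numpy as np

    G_WORLD = np.array([0.0, 0.0, -9.81])  # world-frame gravity (m/s^2)

    def predict_body_velocity(v_b, f_imu, omega, R_wb, dt):
        """One Euler integration step of the body-frame velocity from IMU data.

        v_b   : (3,) current body-frame velocity estimate (m/s)
        f_imu : (3,) accelerometer specific force, body frame (m/s^2)
        omega : (3,) gyroscope angular rate, body frame (rad/s)
        R_wb  : (3,3) body-to-world rotation, e.g., from an attitude filter
        dt    : time step (s)
        """
        # dv_b/dt = f + R^T g - omega x v_b : specific force plus gravity
        # mapped into the body frame, minus the apparent rotation of the
        # body frame itself.
        v_dot = f_imu + R_wb.T @ G_WORLD - np.cross(omega, v_b)
        return v_b + v_dot * dt

Integrating this alone drifts with accelerometer bias and attitude error, which is exactly what the direction-only optic-flow correction above is meant to compensate.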
In this paper, we introduce Vision Tape (VT), a novel class of flexible, compound-eye-like linear vision sensors dedicated to motion extraction and proximity estimation. This novel sensor possesses intrinsic mechanical flexibility that allows its shape to adapt over a wide range, providing an adjustable field of view as well as integration with numerous substrates and curvatures. VT extracts the Optic Flow (OF) of the visual scene to calculate the motion vector, which allows proximity estimation based on the motion-parallax principle. To validate the functionality of VT, we designed and fabricated an exemplary prototype consisting of an array of eight photodiodes attached to a flexible PCB that acts as mechanical and electrical support. This prototype performs image acquisition and processing with an integrated microcontroller at a frequency of 1000 fps, even while the sensor is bent. With this setup, the effect of the VT shape on motion perception and proximity estimation is studied and, in particular, the effect of the pixel-to-pixel angle is discussed. The results of these experiments allow an optimal configuration of the sensor for OF extraction to be estimated. Subsequently, a method that enhances the quality of the extracted OF for non-optimal configurations is proposed. The experimental results show that, by applying the proposed method to VT at a suboptimal curvature, the quality of the OF can be increased by up to 176% and that of the proximity estimation by 178%.
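To make the processing chain concrete, below is a minimal Python/NumPy sketch of a gradient-based 1-D optic-flow estimate over a short pixel line, followed by proximity estimation from motion parallax. Function names and parameters are ours for illustration; the prototype's actual OF algorithm is not detailed in the abstract, and the perpendicular velocity v_perp is an assumed known input (e.g., from an ego-motion estimate).

    import numpy as np

    def optic_flow_1d(prev, curr):
        """Lucas-Kanade-style optic flow over a 1-D pixel line, in
        pixels/frame (positive = pattern moving toward higher indices).
        """
        prev = prev.astype(float)
        curr = curr.astype(float)
        Ix = np.gradient(curr)      # spatial intensity gradient
        It = curr - prev            # temporal intensity derivative
        denom = np.dot(Ix, Ix)
        if denom < 1e-9:
            return 0.0              # textureless scene: flow unobservable
        # Least-squares solution of Ix * u + It = 0 over all pixels.
        return -np.dot(It, Ix) / denom

    def proximity_from_parallax(flow_px, pixel_angle, fps, v_perp):
        """Motion parallax: angular flow OF = v_perp / D, so 1/D = OF / v_perp.

        flow_px     : optic flow (pixels/frame)
        pixel_angle : inter-pixel viewing angle (rad), set by the tape's curvature
        fps         : frame rate (the prototype runs at 1000 fps)
        v_perp      : velocity component perpendicular to the viewing
                      direction (m/s), assumed known
        """
        of_angular = abs(flow_px) * pixel_angle * fps   # rad/s
        return of_angular / max(abs(v_perp), 1e-9)      # proximity 1/D (1/m)

The dependence on pixel_angle in this sketch also hints at why the pixel-to-pixel angle, and hence the tape's curvature, matters for OF quality in the experiments described above.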