Helmholtz Association

Work Package 3100 – Autonomous Operations

Remotely operated systems are typically used when areas of interest are inaccessible to humans for physical or technical reasons (e.g. in the offshore industry). The human operator controls the system mainly on the basis of video data. This method of control resembles natural human operation: we recognize objects visually and react appropriately based on our assessment of the situation. A capable data link between operator and robot is therefore mandatory; where such a link is not feasible (e.g. because a long cable would hamper operations, or the distance is too large), an autonomous system is used instead. Autonomous systems operate completely self-contained, but their degrees of freedom with respect to the type of operation are tightly restricted, owing to their limited intelligence.

On a daily basis, humans make thousands of decisions, from simple to highly complex, based on our senses (sight, hearing, touch, ...) as well as on our experience. This complex human decision-making strategy has not yet been implemented on today's autonomous systems. Nevertheless, growing scientific challenges demand more powerful autonomous systems that can operate in harsh terrain or high-risk environments (e.g. under ice, or on the far side of the Moon) and cope with critical situations. This challenge is present in both worlds, lunar and deep sea.

Overall, work package 3100 aims to enhance the technical capabilities of autonomous systems, for example through the planned implementation of a SLAM (Simultaneous Localization and Mapping) algorithm in vehicle control. In contrast to typical waypoint navigation (from waypoint A to B to C, along direct paths), a SLAM-equipped autonomous system can operate in unknown terrain. As it moves forward, it continuously estimates its position, builds a map, detects obstacles and bypasses them; such a system possesses far more autonomy, because it does not follow a predefined direct path toward waypoints B and C. The longer the vehicle travels, the more precise the map becomes, and the map itself is then used for decision-making (e.g. optimal routing for the way back, handling of obstacles or cliffs).
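The navigation loop described above (estimate position, update the map, replan around newly detected obstacles) can be illustrated with a deliberately simplified sketch. This is not the WP 3100 implementation: the grid world, the perfect adjacent-cell "sensor", and the BFS replanner are all illustrative assumptions standing in for real SLAM components, but the sketch shows how an incrementally built map replaces a fixed direct path to the waypoint.

```python
from collections import deque

# Hypothetical 10x10 world: 0 = free, 1 = obstacle. The vehicle does NOT
# know this map in advance; it discovers obstacles while moving.
WORLD = [[0] * 10 for _ in range(10)]
for x in range(2, 8):
    WORLD[5][x] = 1  # a wall blocking the direct path to the waypoint

def neighbors(cell):
    """Four-connected grid neighbours inside the world bounds."""
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 10 and 0 <= ny < 10:
            yield (nx, ny)

def plan(start, goal, known_obstacles):
    """BFS shortest path over cells not yet known to be blocked."""
    queue, prev = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        for n in neighbors(cell):
            if n not in prev and n not in known_obstacles:
                prev[n] = cell
                queue.append(n)
    return None  # no path with current knowledge

def navigate(start, goal):
    """Move toward the goal, sensing adjacent cells and replanning."""
    pose, known, trajectory = start, set(), [start]
    while pose != goal:
        # "Sense": adjacent obstacle cells are added to the vehicle's map.
        known |= {n for n in neighbors(pose) if WORLD[n[1]][n[0]] == 1}
        path = plan(pose, goal, known)
        pose = path[1]  # take one step along the current plan
        trajectory.append(pose)
    return trajectory, known

trajectory, mapped = navigate((0, 5), (9, 5))
```

Because the obstacle map only grows, the replanner never revisits a route it has learned is blocked; this captures, in miniature, why the map becomes more useful the longer the vehicle travels, e.g. for choosing an optimal route back.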

However, the technique, while compelling, is hard to implement, because the environmental conditions and the type of vehicle strongly shape the technical configuration and performance. For example, video-based terrain detection works well on the Moon thanks to excellent visibility, but could fail completely in the deep sea due to poor visibility caused by particles suspended in the water column. In addition, an autonomous wheeled rover can simply stop when an obstacle lies ahead, pause, analyze the situation and find the best path forward; in contrast, a torpedo-shaped AUV (autonomous underwater vehicle) travelling in the water column cannot stop: because it is buoyant, it would immediately float upward. Therefore, WP 3100 intends to apply several techniques on different autonomous systems (e.g. crawlers, rovers, AUVs).