Trying out different behaviours is one of the classic ways of learning: success or failure determines which behaviour is adopted. This principle can also be transferred to the world of robots.


At the Institute for Intelligent Process Automation and Robotics of the Karlsruhe Institute of Technology (KIT), the Robot Learning Group (ROLE) focuses on various aspects of machine learning. The scientists are investigating how robots can learn to solve tasks by trying them out on their own. These methods are used in particular for learning object manipulation, for example for grasping objects in a typical bin picking scenario. An Ensenso N10 3D camera mounted directly on the robot’s “head” provides the required image data.

Grasping randomly positioned objects is a central task, especially in industrial automation. However, current bin picking solutions are often inflexible and heavily tailored to the specific workpiece to be gripped. The research projects of the Robot Learning Group promise a remedy, for example with robots that independently learn to pick up previously unknown objects from a container. To learn such a task, the robot first begins with random gripping attempts, much as a human would. A neural network links the captured 3D images to the outcome of each attempt: for every image, the gripping result, determined by a force sensor in the gripper, is stored. The AI (artificial intelligence) uses this stored data to identify promising gripping points for the objects and thus “trains” itself. As is usual with modern reinforcement learning methods (a branch of machine learning in which a strategy is learned autonomously, guided by rewards), large amounts of data and many gripping attempts are essential. However, the researchers at KIT were able to significantly reduce the number of gripping attempts required and thus also shorten the learning time.
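To make the learning step described above more concrete, the following is a minimal sketch of this kind of training: a small convolutional network maps a depth-image crop around a candidate gripping point to a predicted success probability and is trained on logged (image, success) pairs. The network architecture, names and data shapes are illustrative assumptions, not the actual KIT implementation.

```python
# Minimal sketch (illustrative, not KIT's actual code): learning grasp success from
# logged gripping attempts. Each training sample is assumed to be a depth-image crop
# around a candidate gripping point plus a binary label (1 = object held, 0 = grasp
# failed) taken from the force sensor in the gripper.
import torch
import torch.nn as nn

class GraspSuccessNet(nn.Module):
    """Small CNN predicting the probability that a grasp at the crop centre succeeds."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # logit of the grasp-success probability

    def forward(self, depth_crop):
        x = self.features(depth_crop).flatten(1)
        return self.head(x)

def train_step(model, optimizer, depth_crops, success_labels):
    """One gradient step on a batch of (depth crop, success label) pairs."""
    criterion = nn.BCEWithLogitsLoss()
    optimizer.zero_grad()
    logits = model(depth_crops).squeeze(1)
    loss = criterion(logits, success_labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in data; on the real system the crops and labels
# would come from the robot's own gripping trials.
model = GraspSuccessNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
depth_crops = torch.randn(8, 1, 64, 64)       # 8 depth crops of 64x64 pixels
success_labels = torch.randint(0, 2, (8,))    # 1 = successful grasp, 0 = failed grasp
print(train_step(model, optimizer, depth_crops, success_labels))
```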

The right grip reduces training time

In contrast to analytical (or model-based) gripping methods, the ROLE robot does not need the features required for recognition to be described in advance. What matters instead is how often the system has already managed to grasp an object successfully when faced with “similar” images. Which grip the robot tries next is therefore critical for fast learning progress: with the help of a neural network, the outcome of a candidate grip can be predicted from the knowledge gathered so far, as sketched below.
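The following sketch shows one common way such predictions can guide the choice of the next grip, an epsilon-greedy selection rule: mostly pick the candidate with the highest predicted success, occasionally try a random one to keep exploring. The function name, the epsilon value and the candidate-generation step are assumptions for illustration, not details confirmed by the article.

```python
# Illustrative sketch (not the ROLE group's implementation): use the success
# predictions of a trained model to pick the next grip, occasionally trying a
# random grip so that the robot keeps gathering new training data.
import random
import torch

def choose_next_grasp(model, candidate_crops, epsilon=0.1):
    """candidate_crops: tensor of shape (N, 1, H, W), one depth crop per candidate grip."""
    if random.random() < epsilon:
        # Explore: try a random candidate.
        return random.randrange(candidate_crops.shape[0])
    with torch.no_grad():
        # Exploit: score every candidate and pick the most promising one.
        scores = torch.sigmoid(model(candidate_crops).squeeze(1))
    return int(scores.argmax())
```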

“For a well-functioning system, we currently need about 20,000 gripping experiments, which corresponds to about 80 hours of training time on the robot,” explains Lars Berscheid, researcher at KIT and member of the Robot Learning Group. These figures are approximate and depend on many factors, such as the success rate of random grips, which in turn is influenced, among other things, by the component geometry. As is common with learning systems, the amount of available data is the limiting factor for the system’s capabilities. […]


Read more – www.qualitymag.com
