Seminars
20-08-2020
Robot Learning Lab, Dept. of Computer Science, Rutgers School of Arts and Sciences
CeRVIM Webinar: Model Identification for Robotic Manipulation

Abstract: A popular approach in robot learning is model-free reinforcement learning (RL), where a control policy is learned directly from sensory inputs by trial and error, without explicitly modeling the effects of the robot's actions on the controlled objects or system. While this approach has proved very effective for learning motor skills, it suffers from several drawbacks in the context of object manipulation, because the types of objects and their arrangements vary significantly across tasks. An alternative approach that may address these issues more efficiently is model-based RL. A model in RL generally refers to a transition function that maps a state and an action to a probability distribution over possible next states. In this talk, I will present my recent work on data-efficient, physics-driven techniques for identifying models of manipulated objects. To perform a task in a new environment with unknown objects, the robot first identifies, from sequences of images, the 3D mesh models of the objects as well as their physical properties, such as their mass distributions, moments of inertia, and friction coefficients. The robot then reconstructs the observed scene in a physics simulation and predicts the motions of the objects when manipulated. The predicted motions are then used to select a sequence of actions to apply to the real objects. Simulated virtual worlds learned from data also offer safe environments for exploration and for learning model-free policies.

Biography: Abdeslam Boularias is an Assistant Professor of computer science at Rutgers, The State University of New Jersey, where he works on robot learning. Previously, he was a Project Scientist in the Robotics Institute of Carnegie Mellon University and a Research Scientist at the Max Planck Institute for Intelligent Systems in Tübingen, where he worked with Jan Peters in the Empirical Inference department directed by Bernhard Schölkopf. From January 2006 to July 2010, he was a PhD student at Laval University under the supervision of Brahim Chaib-draa. His PhD thesis focused on reinforcement learning and planning in partially observable environments.
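For illustration only, the sketch below gives a minimal, hypothetical instance of the action-selection idea in the abstract: candidate actions are evaluated in a physics model whose parameters (mass, friction coefficient) are assumed to have been identified from observation, and the action with the best predicted outcome is chosen. The planar block-push model and all numbers are stand-ins, not the speaker's implementation, which operates on full 3D mesh reconstructions in simulation.

    # Minimal sketch of model-based action selection with identified physics
    # parameters. The mass and friction values are hypothetical stand-ins for
    # quantities a robot would estimate from observed object motion.
    import numpy as np

    MASS = 0.5       # kg, assumed identified from observation
    FRICTION = 0.3   # sliding friction coefficient, assumed identified
    G = 9.81         # m/s^2

    def predict_slide_distance(impulse):
        """Predict how far a pushed block slides before friction stops it."""
        v0 = impulse / MASS                    # velocity right after the push
        return v0 ** 2 / (2.0 * FRICTION * G)  # distance until friction dissipates the kinetic energy

    def select_push(goal_distance, candidate_impulses):
        """Choose the push whose predicted resting position is closest to the goal."""
        predictions = np.array([predict_slide_distance(i) for i in candidate_impulses])
        best = int(np.argmin(np.abs(predictions - goal_distance)))
        return candidate_impulses[best], predictions[best]

    impulses = np.linspace(0.1, 1.0, 10)   # candidate push impulses [N*s]
    impulse, predicted = select_push(goal_distance=0.4, candidate_impulses=impulses)
    print(f"apply impulse {impulse:.2f} N*s, predicted slide {predicted:.3f} m")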
Zoom meeting