Part 1: Kei Okada starts his first talk with a short introduction to the history of humanoid robotics research at JSK and presents several earlier projects such as HARP (Human Autonomous Robot Project). He then continues to explore knowledge representation of everyday activities and knowledge-based object localization before concluding with motion imitation for robots. The compact and thorough presentation is suitable for beginners.
PhD-Students
Lectures
Follow Michael Beetz's talk on the exciting topic of digital twin knowledge bases. The term digital twin refers to a virtual, AI-based replica of a physical object in the real world. It is an emerging technology that plays a crucial role in Industry 4.0 and the digitization of manufacturing across several domains. In retail, for example, digital twins provide an exact digital replica of the store and warehouse, including the location of each product. In his comprehensive talk, Michael Beetz focuses on the aspect of knowledge representation. He proposes a hybrid reasoning system that couples simulation-based with symbolic reasoning and aims to demonstrate the gains of such a combination.
Michael Suppa from Roboception GmbH gives useful insights into robot perception applications in real-world environments. Roboception provides 3D vision hardware and software solutions that enable industrial robotic systems to perceive their environments in real time. His talk introduces sensing principles, confidence and error modelling, as well as pose estimation and SLAM (simultaneous localization and mapping). He also lists the requirements for real-world perception and manipulation systems in industrial environments. His informative and application-oriented talk is suitable for beginners.
Part 2: In his very engaging follow-up lecture, Markus Vincze continues to discuss tasks for robot vision (detecting, grasping, and placing objects) in situated environments. In this talk, he demonstrates 3D object modelling and stresses the importance of robotics simulations, especially for teaching robots the orientations of objects required for grasping actions. He presents various approaches to object recognition and provides an introduction to deep learning. The talk is also suitable for beginners.
Part 1: In the first part of his highly interesting lecture, Markus Vincze gives useful insights into robot vision and presents his vision of domestic robots. As an expert on 2D and 3D vision, he trains robots to understand the functions of objects and how they can help humans in everyday life situations. He briefly introduces two EU projects on domestic robots, HOBBIT and Squirrel, before diving deeper into tasks for robot vision in real-world environments (detection, grasping, placing). The lecture is suitable for beginners.
Tools and Datasets
The video is a tutorial showing the basics of the CRAM framework, a toolbox for designing, implementing, and deploying software on autonomous robots. The aim of the tutorial is to (1) give an intuition of what knowledge the robot needs to execute even a simple fetch-and-place task, (2) show how many different things can go wrong and teach how to write simple failure-handling strategies, and (3) familiarize the user with the API of the actions already implemented in the CRAM framework.
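CRAM plans themselves are written in Common Lisp using action designators and dedicated failure-handling constructs; the Python sketch below is only a language-neutral illustration of the kind of retry logic the tutorial covers. All names in it (MockRobot, perceive, grasp, place, ObjectNotFound, GraspFailed) are invented for this example and are not part of the CRAM API.

```python
import random

# Hypothetical failure types; real CRAM defines its own failure hierarchy in Lisp.
class ObjectNotFound(Exception): pass
class GraspFailed(Exception): pass

class MockRobot:
    """Stand-in for a robot interface so the sketch runs without hardware."""
    def perceive(self, object_type):
        if random.random() < 0.3:
            raise ObjectNotFound(object_type)
        return {"type": object_type, "pose": "somewhere-on-the-table"}
    def grasp(self, obj):
        if random.random() < 0.3:
            raise GraspFailed(obj["type"])
    def place(self, obj, target_pose):
        print(f"placed {obj['type']} at {target_pose}")
    def look_from_new_viewpoint(self):
        print("changing viewpoint and re-perceiving")
    def open_gripper(self):
        print("opening gripper before retrying the grasp")

def fetch_and_place(robot, object_type, target_pose, max_retries=3):
    """Fetch an object of the given type and place it at target_pose,
    retrying with simple recovery strategies when sub-actions fail."""
    for _ in range(max_retries):
        try:
            obj = robot.perceive(object_type)   # may raise ObjectNotFound
            robot.grasp(obj)                    # may raise GraspFailed
            robot.place(obj, target_pose)
            return True
        except ObjectNotFound:
            robot.look_from_new_viewpoint()     # recovery: search from elsewhere
        except GraspFailed:
            robot.open_gripper()                # recovery: reset and retry
    return False

if __name__ == "__main__":
    print("success:", fetch_and_place(MockRobot(), "bottle", "kitchen-counter"))
```

The point of the structure is the one made in the tutorial: each sub-action can fail in its own way, and a plan becomes robust by attaching a small recovery strategy to each failure type rather than assuming a straight-line execution.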
Bio:
Gayane (Gaya for short) is a PhD student at the Institute for Artificial Intelligence of the University of Bremen. Her main research interests lie in the area of cognition-enabled robot executives. She is currently actively and passionately involved in the development of CRAM. Before joining Michael Beetz's group in November 2013, she worked for one year as a research assistant at Kastanienbaum GmbH (now Franka Emika) with Sami Haddadin, in close collaboration with the Robotics and Mechatronics Center of DLR. Before that, she acquired her M.Sc. degree in Informatics with a major in AI and Robotics at the Technical University of Munich. Before coming to Germany, she held a number of short-term jobs in iPhone game development and web development. She received her B.Eng. degree in Informatics with a major in Information Security from the State Engineering University of Armenia.
openEASE is a web-based Knowledge Processing Service for Robots and Robotics/AI Researchers.
pracmln is a toolbox for statistical relational learning and reasoning that supports efficient learning and inference in relational domains; as such, it also includes tools for standard graphical models. pracmln started as a fork of the ProbCog toolbox and has been extended with the latest developments in learning and reasoning by the Institute for Artificial Intelligence at the University of Bremen, Germany.
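The snippet below is not the pracmln API; it is a toy, self-contained illustration of the weighted-formula semantics of Markov logic networks that systems like pracmln build on: every possible world is scored by the exponentiated sum of the weights of the formulas it satisfies, and conditional probabilities are ratios of these scores. The domain, formula, and weight are made up for the example.

```python
import itertools, math

# Ground atoms of a tiny domain with one constant "Anna".
atoms = ["Smokes(Anna)", "Cancer(Anna)"]

# One weighted formula: Smokes(Anna) => Cancer(Anna), with weight 1.5.
weight = 1.5
def formula_satisfied(world):
    return (not world["Smokes(Anna)"]) or world["Cancer(Anna)"]

def world_score(world):
    # exp(sum of weights of the formulas satisfied in this world)
    return math.exp(weight) if formula_satisfied(world) else 1.0

# Enumerate all possible worlds (truth assignments to the ground atoms).
worlds = [dict(zip(atoms, values))
          for values in itertools.product([False, True], repeat=len(atoms))]

# P(Cancer(Anna) | Smokes(Anna)) = sum of scores of evidence-consistent worlds
# where the query holds, divided by the sum of scores of all such worlds.
evidence = lambda w: w["Smokes(Anna)"]
query = lambda w: w["Cancer(Anna)"]

z = sum(world_score(w) for w in worlds if evidence(w))
p = sum(world_score(w) for w in worlds if evidence(w) and query(w)) / z
print(f"P(Cancer(Anna) | Smokes(Anna)) = {p:.3f}")  # about 0.818 for weight 1.5
```

Real systems like pracmln avoid this brute-force enumeration and instead provide efficient approximate and lifted inference as well as weight learning from training databases.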
KnowRob is a knowledge processing system that combines knowledge representation and reasoning methods with techniques for acquiring knowledge and for grounding it in a physical system, and it can serve as a common semantic framework for integrating information from different sources. KnowRob combines static encyclopedic knowledge, common-sense knowledge, task descriptions, environment models, object information, and information about observed actions, acquired from various sources (manually axiomatized, derived from observations, or imported from the web). It supports different deterministic and probabilistic reasoning mechanisms, clustering, classification, and segmentation methods, and includes query interfaces as well as visualization tools.
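KnowRob itself is queried through Prolog predicates over rich ontologies; the sketch below only illustrates, with invented facts and helper names, the general idea of combining encyclopedic knowledge (a class hierarchy) with an environment model (observed object instances and their locations) to answer a single query.

```python
# "Encyclopedic" knowledge: a tiny, made-up class hierarchy.
is_a = {
    "Cup": "Container",
    "Bowl": "Container",
    "Knife": "Tool",
}

# "Environment model": object instances observed by the robot and their locations.
instances = [
    {"name": "cup_1", "type": "Cup", "location": "kitchen_counter"},
    {"name": "knife_1", "type": "Knife", "location": "drawer_2"},
    {"name": "bowl_1", "type": "Bowl", "location": "cupboard_3"},
]

def subclass_of(cls, super_cls):
    """Follow the is_a chain to decide whether cls is a (transitive) subclass."""
    while cls is not None:
        if cls == super_cls:
            return True
        cls = is_a.get(cls)
    return False

def where_can_i_find(super_cls):
    """Answer 'where is something of kind X?' using both knowledge sources."""
    return [(inst["name"], inst["location"])
            for inst in instances if subclass_of(inst["type"], super_cls)]

print(where_can_i_find("Container"))
# -> [('cup_1', 'kitchen_counter'), ('bowl_1', 'cupboard_3')]
```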
CRAM (Cognitive Robot Abstract Machine) is a software toolbox for the design, implementation, and deployment of cognition-enabled autonomous robots performing everyday manipulation activities. CRAM equips autonomous robots with lightweight reasoning mechanisms that can infer control decisions rather than requiring the decisions to be preprogrammed. In this way, CRAM-programmed autonomous robots are much more flexible, reliable, and general than control programs that lack such cognitive capabilities. CRAM does not require the whole domain to be stated explicitly in an abstract knowledge base. Rather, it grounds the symbolic expressions of the knowledge representation into the perception and actuation routines and into the essential data structures of the control programs.
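CRAM expresses such underspecified, symbolic action descriptions as Lisp designators that are resolved at execution time; the Python sketch below is a purely conceptual illustration of that grounding idea, with an invented dictionary format and resolver function rather than CRAM's actual designator language.

```python
# A symbolic, underspecified action description: no pose, no grasp, no arm yet.
symbolic_action = {
    "type": "fetching",
    "object": {"type": "cup", "color": "blue"},
}

def perceive_matching_object(description):
    """Stand-in for a perception routine that grounds the object description."""
    # In a real system this would call the robot's perception pipeline.
    return {"type": description["type"], "pose": (1.2, 0.4, 0.9)}

def resolve(action):
    """Turn the abstract action into concrete parameters just before execution."""
    obj = perceive_matching_object(action["object"])
    grasp = "top" if obj["type"] in ("cup", "bowl") else "side"
    return {"object_pose": obj["pose"], "grasp": grasp, "arm": "right"}

print(resolve(symbolic_action))
# -> {'object_pose': (1.2, 0.4, 0.9), 'grasp': 'top', 'arm': 'right'}
```

The design point this mirrors is that the decision (which pose, which grasp, which arm) is inferred from perception and context at run time instead of being hard-coded into the plan.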