PhD students

Content suitable for PhD students

An Introduction to Robot Perception

From simple to complex, from robot vacuum cleaner to self-driving car: every robotic system needs some form of perception in order to take in information from its environment and to understand how it can manipulate it. Perception comes in many forms. Tim Patten gives a highly interesting introduction to how robots deal with object identification: what is it? (recognition), what type is it? (classification), where is it? (object detection), and how do I manipulate it? (grasping). The talk is suitable for beginners.
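The four questions map onto the stages of a typical perception pipeline. As a rough illustration only, a minimal Python sketch with hypothetical function names that are not from the talk:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # classification: what type is it?
    box: tuple      # object detection: where is it? (x, y, w, h)
    score: float    # recognition confidence: what is it?

def detect_objects(image) -> list:
    """Stand-in for any trained object detector."""
    # A real system would run a model on the image here.
    return [Detection(label="mug", box=(120, 80, 60, 70), score=0.93)]

def plan_grasp(det: Detection) -> tuple:
    """Stand-in for grasp pose estimation: how do I manipulate it?"""
    x, y, w, h = det.box
    return (x + w / 2, y + h / 2)   # naive: aim for the box centre

for det in detect_objects(image=None):
    print(det.label, det.score, plan_grasp(det))
```

In practice detect_objects would wrap a trained detector and plan_grasp a grasp pose estimator; the structure, not the implementation, is the point.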

Data for Robot Manipulation

Part 2: In his equally interesting follow-up lecture, Animesh Garg continues to explore compositional planning and multi-step reasoning, i.e. how a robot can carry out multiple tasks in a given structure. He also examines robot perception via structured learning from instruction videos and tackles the question of how to collect the data required for robot learning.
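To make "multiple tasks in a given structure" concrete: one common way to view compositional planning is as composing reusable primitive skills into longer sequences. A minimal, hypothetical Python sketch (the names are ours, not from the lecture):

```python
# Composing primitive skills into a multi-step task (illustrative only).

def pick(obj: str) -> None:
    print(f"pick {obj}")

def place(obj: str, at: str) -> None:
    print(f"place {obj} at {at}")

def open_container(container: str) -> None:
    print(f"open {container}")

def fetch_mug() -> None:
    """A composite task: a structured sequence of primitive skills."""
    open_container("cupboard")
    pick("mug")
    place("mug", at="counter")

fetch_mug()
```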

Cognitive Robot Abstract Machine

Michael Beetz provides an educational introduction to CRAM, the Cognitive Robot Abstract Machine. How can we write a robot control program in which the robot receives instructions for a task and is able to produce the behavior necessary to accomplish it? This simple question is not yet fully answered, as there is still an information gap between instruction and body motion that has to be filled in a semantically meaningful manner. One way is to simplify perception tasks and implement motion constraints. Follow Michael Beetz's interesting approach to metacognition.
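CRAM itself is a Lisp-based plan language; the following Python fragment is only a hedged illustration of the instruction-to-motion gap, with invented names rather than CRAM's actual API:

```python
# Hypothetical illustration of the instruction-to-motion gap (invented
# names, not CRAM's Lisp-based plan language): the instruction names a
# task but omits the parameters the robot's body needs to execute it.

instruction = {"action": "pour", "object": "kettle"}

# Knowledge the robot must supply to close the gap between instruction
# and body motion, e.g. as motion constraints.
MOTION_DEFAULTS = {
    "pour": {"grasp": "handle", "tilt_deg": 70, "keep_level_until_target": True},
}

def resolve_motion_parameters(instr: dict) -> dict:
    """Fill in what the instruction leaves implicit."""
    motion = dict(instr)
    motion.update(MOTION_DEFAULTS.get(instr["action"], {}))
    return motion

print(resolve_motion_parameters(instruction))
```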

Generalizable Autonomy in Robot Manipulation

Part 1: In his first noteworthy lecture, Animesh Garg presents his vision of building intelligent robotic assistants that learn with the same efficiency and generality as humans, using learning algorithms, particularly in robot manipulation. Humans learn through instruction or imitation and can adapt to new situations by drawing on experience. The goal is to have robotic systems recognize new objects in new environments autonomously (diversity) and enable them to do things they were not trained to do by using long-term reasoning (complexity).

Some Decisional Challenges for Human-Robot Joint Action

Part 2: Rachid Alami's second presentation continues with a dive into the exciting topic of Human-Robot Interaction (HRI). When humans interact with each other, for example by handing a pen to someone else, they exchange verbal and non-verbal signals. Rachid Alami gives a very good, short introduction to human-human interaction before exploring the challenges of adapting joint action between humans to human-robot interaction. He presents multiple "decisional ingredients" for interactive autonomous robot assistants.

Task Instantiation from Life-long Memories of Mobile Robots

Part 2: In his second lecture, Kei Okada discusses episodic memory: the collection of past personal experiences, in contrast to semantic memory, which refers to the general knowledge about the world that humans accumulate throughout their lives. In order to achieve a goal such as tidying up objects, a robot has to rely on acquired knowledge about where to find objects and what to do with them.
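As a toy illustration of how episodic memories could instantiate such a task (our own schema, not the system presented in the lecture):

```python
# Toy episodic memory: past experiences record where objects were last
# seen and where they belong.

episodic_memory = [
    {"object": "cup",  "seen_at": "desk", "belongs_on": "shelf"},
    {"object": "book", "seen_at": "sofa", "belongs_on": "bookcase"},
]

def tidy_up() -> None:
    """Instantiate a tidy-up task from remembered episodes."""
    for episode in episodic_memory:
        print(f"fetch {episode['object']} from {episode['seen_at']} "
              f"and return it to {episode['belongs_on']}")

tidy_up()
```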

On Decisional Abilities for a Cognitive and Interactive Robot

Part 1: In the first part of his captivating lecture, Rachid Alami discusses the decisional abilities required for Human-Robot Interaction (HRI), and Human-Robot Collaboration in particular. The challenge is to develop and build cognitive and interactive abilities that allow robots to perform tasks collaboratively with humans, not merely for humans. The first part centers on an introduction to human-robot joint action and the problem of combining task planning (what to do) with motion planning (how to do it), especially for grasping, and how it can be solved.
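One standard way to combine the two levels, sketched below with invented names and a deterministic stand-in for the motion checker, is to accept a symbolic step only once a feasible motion has been found for it:

```python
# Combining task planning (what) with motion planning (how): a symbolic
# step is accepted only if a feasible motion is found, otherwise an
# alternative is tried. All names are invented for illustration.

INFEASIBLE = {"pick pen"}   # pretend the direct grasp is blocked

def motion_feasible(step: str) -> bool:
    """Stand-in for a motion planner / grasp-reachability check."""
    return step not in INFEASIBLE

def plan(task_steps: list) -> list:
    result = []
    for step in task_steps:
        for candidate in (step, step + " (alternative grasp)"):
            if motion_feasible(candidate):
                result.append(candidate)
                break
        else:
            raise RuntimeError(f"no feasible motion for: {step}")
    return result

print(plan(["pick pen", "hand over pen"]))
# ['pick pen (alternative grasp)', 'hand over pen']
```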

Humanoid Robots in Everyday Activities

Part 1: Kei Okada starts his first talk with a short introduction to the history of humanoid robotics research at JSK and presents various former projects such as HARP (Human Autonomous Robot Project). He then explores knowledge representation of everyday activities and knowledge-based object localization before concluding with motion imitation for robots. The compact and thorough presentation is suitable for beginners.

Digital Twin Knowledge Bases

Follow Michael Beetz's talk on the exciting topic of digital twin knowledge bases. The term digital twin refers to a virtual, AI-based image of a physical object in the real world. It is an emerging technology and plays a crucial role in Industry 4.0 and the digitization of manufacturing across several domains. In retail, for example, a digital twin provides an exact digital replica of the store and warehouse and the location of each product. In his comprehensive talk, Michael Beetz focuses on the aspect of knowledge representation.
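As a minimal sketch of the retail example, assuming a toy schema of our own invention rather than anything from the talk:

```python
# A retail digital twin as a tiny knowledge base: each product maps to
# its location in the virtual replica of the store.

store_twin = {
    "milk":  {"aisle": 3, "shelf": 2, "stock": 14},
    "pasta": {"aisle": 7, "shelf": 1, "stock": 40},
}

def locate(product: str) -> str:
    entry = store_twin.get(product)
    if entry is None:
        return f"{product}: not in the twin"
    return (f"{product}: aisle {entry['aisle']}, shelf {entry['shelf']}, "
            f"{entry['stock']} in stock")

print(locate("milk"))   # milk: aisle 3, shelf 2, 14 in stock
```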