
Part 2: In this presentation, Michael Beetz introduces KnowRob, an exciting new architecture that enables knowledge representation and reasoning for robot agents.

Date recorded
2021-11-11

Part 7: In this presentation, Maria Hedblom returns to a topic touched upon in previous parts: how robots perceive objects in a room. This time, however, the focus extends to recognizing certain events, and image schemas are used to realize this recognition.

Date recorded
2021-11-10

Part 5: In this presentation, Jean-Baptiste Weibel covers the exciting topic of "Vision for Robotics", clarifying how a robot perceives objects and entire rooms. This is essential, because without such perception it would be difficult for a robot to interact with objects or within a room.

Date recorded
2021-11-09

Part 4: This presentation is about how a robot decides on its actions in everyday life. David Vernon explains several important cognitive models and compares them with each other.

Date recorded
2021-11-09

Part 2: In his equally interesting follow-up lecture, Animesh Garg continues to explore compositional planning and multi-step reasoning, i.e., cases in which a robot is supposed to perform multiple tasks in a certain structure. He also examines robot perception via structured learning from instruction videos, and tackles the question of how to collect the data required for robot learning.

Date recorded
2020-09-18

In his captivating lecture, Frank Guerin examines the integration of robot vision with a deeper understanding of tasks. As robot grasping is still an unsolved problem, he explores why and how human perception of objects is relevant to manipulation and explains what "transferable toddler skills" are. The lecture is suitable for beginners.

Date recorded
2019-09-20

From small to complex, from robot vacuum cleaner to self-driving car: every robotic system needs some sort of perceptual capability in order to gather information from its environment and to understand how it can manipulate that environment. Perception can come in many forms. Tim Patten gives a highly interesting introduction to how robots deal with object identification: what is it? (recognition), what type is it? (classification), where is it? (object detection), and how do I manipulate it? (grasping). The talk is suitable for beginners.

Date recorded
2019-09-20

Michael Beetz provides an educational introduction to CRAM, the Cognitive Robot Abstract Machine. How can we write a robot control program in which the robot receives instructions for a task and produces the behavior necessary to accomplish it? This simple question is not fully answered yet, as there is still an information gap between instruction and body motion that has to be filled in a semantically meaningful manner. One approach is to simplify perception tasks and implement motion constraints. Follow Michael Beetz's interesting approach to metacognition.

Date recorded
2019-09-18

Part 1: In his first noteworthy lecture, Animesh Garg presents his vision of building intelligent robotic assistants that learn with the same efficiency and generality as humans through learning algorithms, particularly in robot manipulation. Humans learn through instruction or imitation and can adapt to new situations by drawing from experience. The goal is to have robotic systems recognize new objects in new environments autonomously (diversity) and enable them to do things they were not trained to do by using long-term reasoning (complexity). Animesh Garg introduces an approach based on "learning with structured inductive bias and priors", i.e., the ability to generalize beyond the training data.

Date recorded
2019-09-17

Part 2: Rachid Alami's second presentation continues with a dive into the exciting topic of Human-Robot Interaction (HRI). When humans interact with each other, for example by handing a pen to someone else, they exchange verbal and non-verbal signals. Rachid Alami gives a very good, short introduction to human-human interaction before exploring the challenges of adapting joint action between humans for human-robot interaction. He presents multiple "decisional ingredients" for interactive autonomous robot assistants.

Date recorded
2019-09-17