Part 16: The Ubica Robotics team presents a new way to take inventory of goods and other practical retail applications, with the help of revolutionary robots.
Part 15: Benjamin Alt explains in his presentation everything worth knowing about industrialization with robots. He covers not only existing application areas where robots are currently used in factories, but also workflow optimizations that are important for building well-adapted robots.
Part 14: Self-learning algorithms are taking on an increasingly important role in people's lives. In his exciting talk, Carlos Hernandez Corbato presents how robots can continue to develop independently.
Part 13: Britta Wrede explains in her exciting presentation how a robot's learning differs from a human's. She also dispels certain prejudices on this topic with very interesting examples.
Part 12: In this presentation, Katharina Rohlfing introduces the concept of Pragmatic Frames, which captures the collaborative and multimodal nature of language use. She then presents various challenges derived from different application examples.
Part 10: The interaction between people is often comprehensible and very natural. A newborn learns any kind of interaction by just imitating and trying it out. But what about robots? What role does interaction play in cognitive robots? Alessandra Sciutti's exciting presentation addresses exactly this topic and answers these questions.
Part 9: Learning is a topic that concerns us all. We learn new things every day or repeat what we have already learned. But what is the best way to learn? And can we map this knowledge onto algorithms? John Laird addresses these questions in his exciting presentation and tries to give us answers.
Part 8: This presentation is about common sense knowledge & knowledge graphs. In his exciting presentation, Philipp Cimiano not only illustrates what common sense actually is, but also gives interesting examples with the use of short stories.
Part 7: Digital twin systems are an exciting new topic with many different applications. The presentation by Tetsunari Inamura not only introduces a number of them, but also uses concrete examples to show how they can enhance people's lives.
Part 6: Robots should not only support us at home or on land, but also under water. Peter Kampmann talks about fascinating examples of applications and tricky tasks that could be solved with the help of robots.
Part 5: This presentation is about how a robot decides on its actions in everyday life. David Vernon explains some important and exciting cognitive models and compares them with each other.
Part 1: In this presentation, Michael Beetz takes it a bit further and starts with some basics. In order to understand what cognitive-enabled robotics is, it is necessary to first understand what exactly robots are. In addition, it is necessary to understand what cognitive means as well. All these exciting topics will be covered in this presentation.
Part 4: In this presentation, John Laird talks about the integration of perception and action in the Soar cognitive architecture. He also includes exciting theses concerning cognition in psychology.
A commonsense reasoning framework for dynamic knowledge invention via conceptual combination and blending.
Part 2: In this lecture Antonio Lieto tells interesting facts about commonsense reasoning frameworks for dynamic knowledge invention via conceptual combination and blending. For this purpose, real problems and approaches to solutions are used and addressed.
Part 1: The first talk will start with a short review of some basics of AI and computer science. Michael Beetz will also present some interesting examples in the world of cognition-enabled manipulation.
Part 11: In this presentation, Michael Beetz talks about Knowledge representation & reasoning for cognition-enabled robot manipulation. He goes through all the necessary stages step by step and explains them with the help of an example based on a kitchen robot.
Part 9: In this presentation Michael Suppa shows exciting application areas where problems of robots in daily life can be recognized. The focus is on how the problems with the perception of the robots come about and how they can be solved.
Part 8: Unlike the previous parts, this presentation is about a more theoretical general problem and less about examples of robots and their functions. Pedro Lima presents how robots are benchmarked and uses a probabilistic approach as an example.
Event-Predictive Active Inference: Developing Conceptual, Compositional Cognition from Sensorimotor Experiences
Part 6: In this presentation, Martin Butz explains a very exciting approach to creating compositional cognition in robots that draws on sensorimotor experiences. He presents how sensorimotor experience can be used to develop interesting concepts.
Part 3: This presentation will clarify current and upcoming technologies around Cognitively-enabled robotics. In the process, Michael Beetz addresses the interesting questions "Where are we?" and "Where are we going?".
Part 2: This presentation introduces an exciting new architecture called KnowRob by Michael Beetz, which allows knowledge representation and reasoning for robot agents.
Part 7: In this presentation, Maria Hedblom explains a topic that has already been touched upon in the previous parts: how robots perceive objects in a room. This time, however, the focus is also on recognizing certain events, and image schemas are used to realize this recognition.
Part 5: In this presentation, Jean-Baptiste Weibel tells all about the exciting topic of "Vision for Robotics". He clarifies questions about how a robot perceives objects or a whole room. This is essential, because otherwise it would be difficult to interact with objects or within a room.
Action selection and execution in everyday activities: A cognitive robotics and situation model perspective
Part 4: This presentation is about how a robot decides on its actions in everyday life. David Vernon explains some important and exciting cognitive models and compares them with each other.
Part 2: In his equally interesting follow-up lecture, Animesh Garg continues to explore compositional planning and multi-step reasoning, i.e. when a robot is supposed to do multiple tasks in a certain structure. He also examines robot perception via structured learning through instruction videos, and tackles the question of how to collect the data required for robot learning.
In his captivating lecture, Frank Guerin examines how to integrate robot vision with a deeper understanding of tasks. As robot grasping is still an unsolved problem, he explores why and how human perception of objects is relevant to manipulation and explains what "transferable toddler skills" are. The lecture is suitable for beginners.
From small to complex, from robot vacuum cleaner to self-driving car: every robotic system needs some sort of perceptual capabilities in order to perceive information from its environment and to understand how it can manipulate it. Perception can come in many forms. Tim Patten gives a highly interesting introduction on how robots deal with object identification: what is it? (recognition), what type is it? (classification), where is it? (object detection), and how do I manipulate it? (grasping). The talk is suitable for beginners.
Michael Beetz provides an educational introduction to CRAM, the Cognitive Robot Abstract Machine. How can we write a robot control program in which the robot receives instructions for a task and is able to produce the behavior necessary to accomplish it? This simple question is not fully answered yet, as there is still an information gap between instruction and body motion that has to be filled in a semantically meaningful manner. One way is to simplify perception tasks and implement motion constraints. Follow Michael Beetz's interesting approach to metacognition.
Part 1: In his first noteworthy lecture, Animesh Garg presents his vision of building intelligent robotic assistants that learn with the same efficiency and generality as humans do through learning algorithms, particularly in robot manipulation. Humans learn through instruction or imitation and can adapt to new situations by drawing from experience. The goal is to have robotic systems recognize new objects in new environments autonomously (diversity) and enable them to do things they were not trained to do by using long-term reasoning (complexity). Animesh Garg introduces the approach of "learning with structured inductive bias and priors", i.e. the ability to generalize beyond the training data.
Part 2: Rachid Alami's second presentation continues with a dive into the exciting topic of Human-Robot Interaction (HRI). When humans interact with each other, for example by giving a pen to someone else, they exchange (verbal/non-verbal) signals. Rachid Alami gives a very good, short introduction to human-human interaction before exploring the challenges of adopting joint action between humans for human-robot interaction. He presents multiple "decisional ingredients" for interactive autonomous robot assistants.
Part 2: In his second lecture, Kei Okada discusses episodic memory. It describes the collection of past personal experiences in comparison to semantic memory that refers to the general knowledge about the world humans accumulate throughout their life. In order to achieve a goal like tidying up objects, a robot has to rely on acquired knowledge about where to find objects and what to do with them. In a very educational way, Kei Okada further addresses the concepts of task instantiation and experience accumulation mechanism, meaning the formation of patterns and routines in task completion.
Part 1: In the first part of his captivating lecture, Rachid Alami discusses decisional abilities required for Human-Robot Interaction (HRI) and Human-Robot Collaboration in particular. The challenge is to develop and build cognitive and interactive abilities to allow robots to perform collaborative tasks with humans, not for humans. The first part centers on the introduction to Human-Robot Joint Actions and the problems of combining tasks planning (what to do) with motion planning (how to do it), especially for grasping, and how they can be solved.
Part 1: Kei Okada starts his first talk with a short introduction to the history of humanoid robotics research at JSK and presents various former projects such as HARP (Human Autonomous Robot Project). He then continues to explore knowledge representation of everyday activities and knowledge-based object localization before concluding with motion imitation for robots. The compact and thorough presentation is suitable for beginners.
Follow Michael Beetz' talk on the exciting topic of digital twin knowledge bases. The term digital twin refers to a virtual, AI-based image of a physical object in the real world. It is an emerging technology and plays a crucial role in Industry 4.0 and the digitization of manufacturing in several domains. In retail, for example, digital twins provide an exact digital replica of the store and warehouse and the location of each product. In his comprehensive talk, Michael Beetz focuses on the aspect of knowledge representation. He proposes a hybrid reasoning system that couples simulation-based with symbolic reasoning and aims to demonstrate the gain of such a combination.
Michael Suppa from Roboception GmbH gives useful insights into robot perception applications in real-world environments. Roboception provides 3D vision hardware and software solutions that enable industrial robotic systems to perceive their environments in real-time. His talk introduces sensing principles, confidence and error modelling, as well as pose estimation and SLAM (simultaneous localization and mapping). He also lists the requirements for real-world perception and manipulation systems in industrial environments. His informative and application-oriented talk is suitable for beginners.
Part 2: In his very engaging follow-up lecture, Markus Vincze continues to discuss tasks for robot vision (detecting, grasping, and placing objects) in situated environments. In this talk, he demonstrates 3D object modelling and stresses the importance of robotics simulations, especially for training robots on the orientations of objects for grasping actions. He presents various approaches to object recognition and provides an introduction to deep learning. The talk is also suitable for beginners.
Part 1: In the first part of his highly interesting lecture, Markus Vincze gives us useful insights into robotics visions and presents his vision of domestic robots. As an expert on 2D and 3D vision, he trains robots to understand the functions of objects and how they can help humans in everyday life situations. He shortly introduces two EU projects, HOBBIT and Squirrel, on domestic robots before diving deeper into tasks for robot vision in real-world environments (detection, grasping, placing). The lecture is suitable for beginners.
Moritz Tenorth, CTO at the start-up Magazino GmbH, talks about mobile pick-and-place robots in industrial working environments. Magazino develops and builds customized industrial robots and robot platforms mainly used in logistics that serve as robot assistants to humans, for example to factory workers to keep their work environment safe and efficient. Moritz Tenorth gives us an idea of how challenging these tasks are and how they can be solved. He also shares his advice for a successful transition from academia to industry and the work in a real startup environment. Follow his vivid lecture that is also suitable for beginners.
Part 2: In his follow-up lecture, Michael Beetz gives a short recap of his first talk before further exploring knowledge representation and reasoning for robotic agents. He focuses on one of the main problems of human-scale manipulation tasks for robotic systems, action description, when it comes to the performance of abstract tasks like "pour the water out". From "grasp the pot by the handles" to "tilt the pot around the axis between the handles" to "hold the lid while pouring", every action includes multiple intermediate tasks that have to be described in detail for the robot. Follow his comprehensive lecture that covers many aspects of cognition-enabled robotics.
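The decomposition described above can be pictured as a small hierarchical plan library. The sketch below is a hypothetical illustration of expanding an abstract instruction into motion-level steps; the plan-library structure and function names are invented for this example and are not the CRAM API.

```python
# Hypothetical plan library mapping abstract tasks to subtasks.
# Tasks without an entry are treated as directly executable motions.
PLAN_LIBRARY = {
    "pour the water out": [
        "grasp the pot by the handles",
        "tilt the pot around the axis between the handles",
        "hold the lid while pouring",
    ],
    "grasp the pot by the handles": [
        "move gripper to left handle",
        "close gripper",
    ],
}

def expand(task):
    """Recursively expand a task until only motion-level actions remain."""
    subtasks = PLAN_LIBRARY.get(task)
    if subtasks is None:  # leaf: directly executable motion
        return [task]
    steps = []
    for sub in subtasks:
        steps.extend(expand(sub))
    return steps

for step in expand("pour the water out"):
    print(step)
```

Running the sketch prints the fully expanded motion sequence, making visible how a single abstract instruction fans out into many detailed intermediate tasks.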
Part 3: In his third and final lecture, David Vernon discusses recent developments in cognition research. He addresses the common model of cognition that encompasses approaches in Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. It is mainly based on the book "Unified Theories of Cognition" by Allen Newell, a leading investigator in computer science and cognitive psychology. Newell states that cognition takes place over multiple timescales (from millisecond-level to year-level and everything in between). Follow David Vernon's gripping lecture that is also suitable for beginners.
Probabilistic robotics is a relatively new area of robotics and concerned with robot perception and robot manipulation in the face of uncertainty and incomplete knowledge about the world. In his thorough and concise talk, Daniel Nyga introduces the basics of probability theory. He further shows probabilistic graphical models, including Bayesian networks and Markov Random Fields, explores statistical relational learning using Markov logic networks, and concludes with probabilistic natural language understanding.
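The kind of probabilistic reasoning introduced in this talk can be illustrated with a minimal two-node Bayesian network. The example below is a hypothetical "rain causes wet grass" model with invented numbers, not material from the lecture; it computes a posterior by direct enumeration.

```python
# Minimal Bayesian network: Rain -> WetGrass.
# All probabilities are made up for illustration.
P_rain = 0.2                      # P(Rain = true)
P_wet_given = {True: 0.9,         # P(Wet | Rain = true)
               False: 0.1}        # P(Wet | Rain = false)

def posterior_rain_given_wet():
    """P(Rain | Wet) via Bayes' rule: P(R|W) = P(W|R) P(R) / P(W)."""
    joint_true = P_wet_given[True] * P_rain            # P(Wet, Rain)
    joint_false = P_wet_given[False] * (1 - P_rain)    # P(Wet, not Rain)
    return joint_true / (joint_true + joint_false)

print(round(posterior_rain_given_wet(), 3))  # -> 0.692
```

Observing wet grass raises the belief in rain from the 0.2 prior to about 0.69; larger networks generalize the same enumeration over all unobserved variables.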
In their lecture, Jürgen Sturm and Christoph Schütte from Google Germany talk about Google's Cloud Robotics project before diving deeper into specific robot perception problems. Christoph Schütte introduces Cartographer, a system that provides real-time simultaneous localization and mapping (SLAM) in 2D and 3D across multiple platforms. Jürgen Sturm closes the lecture with semantic mapping and spatial intelligence in artificial intelligence. In a nutshell, spatial intelligence describes the ability to understand and remember spatial relations among objects or space; it allows humans to navigate in their environment and to perform jobs. The same applies to robots and their understanding of their environment. Their educational and application-oriented lecture is suitable for beginners.
Part 2: In his follow-up lecture, David Vernon dives deeper into the role of memory, especially in a system. Memory, as he states, is a process rather than a state. He presents different types of memory, explores the role of memory, explains the concept of self-projection, prospection, and internal simulation, and clarifies technical terminology from cognitive science and psychology as well as from robotic literature. He concludes with internal simulation combined with action, and with the concept of forgetting which is important but still not fully understood in neuroscience. His exceedingly interesting talk is suitable for beginners.
Part 1: In his first of three talks, David Vernon gives a concise and coherent overview of cognitive architecture. He begins by explaining the concept of cognition as an umbrella term that encompasses perception, learning, anticipation, action, and adaptation. Cognition allows robots to work independently in challenging environments, to adapt to changes, and to anticipate events in preparing their actions. If cognition was the top of a mountain and the goal to be achieved, architecture would be the base camp that needs to be set up first. Listen to his interesting lecture that is also suitable for beginners.
Part 1: In the opening talk of the first EASE Fall School, Michael Beetz discusses the wide range of topics around cognition-enabled robotics. He explains the challenges and complexity of building and programming a robot that reaches the same level of efficiency in performing everyday tasks as humans do. Listen to his thorough introduction on knowledge representation and reasoning, and logic-based knowledge representation and reasoning in particular, on manipulation intelligence, and on the research approach of the Collaborative Research Center EASE.
Part 3: In this presentation, Axel Ngonga uses the example of a cooking robot to explain the interesting approach of class expression learning with multiple representations.
Integrating advanced skills into robotic systems for reliable autonomous operation is a pressing challenge. Current robot software architectures, many based on the three-layer paradigm, are not flexible enough to dynamically adapt to changes in the environment, the mission, or the robot itself (e.g. internal faults).
In this talk, I will motivate the application of systems engineering in robotics to address this challenge.
I will present our results leveraging systems engineering knowledge for automatic reasoning to adapt the robot software architecture at runtime, with examples in mobile manipulators and underwater robots.
To conclude, I will briefly present our upcoming projects, CoreSense and METATOOL, where we will develop hybrid cognitive architectures with deep understanding and self-awareness capabilities, hopefully resulting in robots that are more flexible, reliable and explainable, and that are capable of tool invention, respectively.
In this course, David Vernon presents the 10th module of the "Design & Implementation of Cognition-enabled Robot Agents" series. After completing this module, you will be able to:
- Describe the three paradigms of cognitive science
- Explain the characteristics of cognitive architecture in each of the three paradigms
- Describe the key components of a hybrid cognitive architecture
- Sketch the design of the ISAC and CRAM cognitive architectures and explain how they operate
Four lectures explain cognitive architectures and how cognitive agents can use them.
We acknowledge the support from the KI-Campus Projekt funded by the German Federal Ministry of Education and Research.
The Syntax of Action
Abstract: Understanding human activity from video is a fundamental problem in today's Computer Vision and Imitation Learning. The video discusses the syntax of human activity and advances the viewpoint that perceived human activity first needs to be parsed, just as in the case of language. Building on these ideas, the video proposes the Ego-OMG framework. Egocentric object manipulation graphs are extracted from a basic parsing of a video of human activity (they represent the contacts of the left and right hand with objects in the scene) and can be used for action prediction.
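The idea of parsing activity into hand–object contacts can be sketched as a small graph. The example below uses a hypothetical per-frame contact sequence, not the actual Ego-OMG pipeline; it collapses repeated contact states and records the transitions between them, which is the kind of structure such a graph captures.

```python
from collections import defaultdict

# Hypothetical per-frame contact states, written as
# (left-hand contact, right-hand contact); None means no contact.
frames = [
    (None, None),
    (None, "knife"),
    ("bread", "knife"),
    ("bread", "knife"),
    ("bread", None),
]

def build_contact_graph(frames):
    """Collapse repeated states and count transitions between them."""
    edges = defaultdict(int)
    prev = None
    for state in frames:
        if prev is not None and state != prev:
            edges[(prev, state)] += 1
        prev = state
    return dict(edges)

graph = build_contact_graph(frames)
for (src, dst), count in graph.items():
    print(src, "->", dst, f"(x{count})")
```

The resulting transition graph is the kind of discrete, symbolic summary of a video over which action prediction can then operate.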
Yiannis Aloimonos is a Professor of Computational Vision and Intelligence at the Department of Computer Science, University of Maryland, College Park, and the Director of the Computer Vision Laboratory at the Institute for Advanced Computer Studies (UMIACS). He is also affiliated with the Institute for Systems Research and the Neural and Cognitive Science Program. He was born in Sparta, Greece, and studied Mathematics in Athens and Computer Science at the University of Rochester, NY (PhD 1990). He is interested in Active Perception and the modeling of vision as an active, dynamic process for real-time robotic systems. For the past five years, he has been working on bridging signals and symbols, specifically on the relationship of vision to reasoning, action, and language.
Path planning is a core problem in robotics. Lydia Kavraki developed a method called the Probabilistic Roadmap Method (PRM), which caused a paradigm shift in the robotics community. The approach introduced randomization schemes that exploited local geometric properties and produced efficient solutions without fully exploring the underlying search space.
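The core PRM idea can be shown in a few dozen lines: sample random collision-free configurations, connect nearby ones with collision-free edges, then search the resulting graph. The sketch below is a toy illustration in a 2D unit square with circular obstacles, not Kavraki's original implementation; all parameter choices are arbitrary.

```python
import math
import random
from collections import deque

def prm_path(start, goal, obstacles, n_samples=200, k=8, seed=0):
    """Probabilistic Roadmap sketch. obstacles: list of (cx, cy, r) circles.

    Returns a list of waypoints from start to goal, or None if the
    roadmap does not connect them.
    """
    rng = random.Random(seed)

    def free(p):
        return all(math.dist(p, (cx, cy)) > r for cx, cy, r in obstacles)

    def edge_free(p, q, steps=20):
        return all(free(((1 - t) * p[0] + t * q[0],
                         (1 - t) * p[1] + t * q[1]))
                   for t in (i / steps for i in range(steps + 1)))

    # 1. Sample collision-free configurations (start and goal included).
    nodes = [start, goal]
    while len(nodes) < n_samples:
        p = (rng.random(), rng.random())
        if free(p):
            nodes.append(p)

    # 2. Connect each node to its k nearest neighbours with free edges.
    adj = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        near = sorted(range(len(nodes)), key=lambda j: math.dist(p, nodes[j]))
        for j in near[1:k + 1]:
            if edge_free(p, nodes[j]):
                adj[i].append(j)
                adj[j].append(i)

    # 3. BFS on the roadmap from start (index 0) to goal (index 1).
    prev, queue = {0: None}, deque([0])
    while queue:
        i = queue.popleft()
        if i == 1:
            path = []
            while i is not None:
                path.append(nodes[i])
                i = prev[i]
            return path[::-1]
        for j in adj[i]:
            if j not in prev:
                prev[j] = i
                queue.append(j)
    return None
```

Note that the search space is never discretized exhaustively: randomized sampling plus local connection checks is what made PRM efficient in high-dimensional configuration spaces.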
Lydia E. Kavraki is the Noah Harding Professor of Computer Science, professor of Bioengineering, professor of Electrical and Computer Engineering, and professor of Mechanical Engineering at Rice University. She is also the Director of the Ken Kennedy Institute at Rice. In robotics and AI, she is interested in enabling robots to work with people and in support of people.
There’s a common misconception that decisions made by computers are automatically unbiased – as opposed to those made by humans. However, Chad Jenkins pointed out many ways in which AI can fail to deliver fair and reasonable results. He pointed out what needs to be done in AI to get the intellectual domain right and how the technology and understanding researchers generate can have a positive impact on the world.
Chad Jenkins is a Professor of Computer Science and Engineering at the University of Michigan as well as the Associate Director of the Robotics Institute and the Editor-in-Chief of the journal “ACM Transactions on Human-Robot Interaction”. His research interests include mobile manipulation, computer vision, interactive robot systems, and human-robot interaction.
Jan Andersen is Head of the Research Office at the University of Southern Denmark. He has a background in Computer Science and Danish Language and has been working with research strategy and research planning. He was involved in building up four very successful research support units and was an advisor to the Rectors of the Danish Technical University, the University of Copenhagen, and the former Royal Veterinary and Agricultural University. He was responsible for the cross-faculty follow-up of the Danish university merger in 2007 and was Head of the Nordic Association of University Administrators Working Group for Research Administrators. An expert on the EU framework programs, he was a board member and co-founder of the Danish Association of Research Managers and Administrators. He hosted the 2009 Annual Conference of the European Association of Research Managers and Administrators (EARMA), was elected EARMA Chairman for 2010-2013, and remained a board member until 2018. From 2013-2018 he was Chair of the COST BESTPRAC Targeted Network, with more than 650 participants from 41 countries. He is the co-author of "Research Management – Europe and Beyond".

Presentation abstract: The changing environment of research towards "Open Science" and competitive, collaborative research projects influences the careers of young researchers. Competition, quality, and the necessity of meeting societal challenges and other "non-academic" requirements in the pursuit of funding highlight the need for external advice and counseling. Here your local research manager and administrator comes to the rescue. Research managers and administrators facilitate the research process from the idea to the realization of the research project. We can be a sparring partner for your career, help you identify funding, and explain – sometimes even solve – the non-academic parts of a research application, e.g. impact, gender issues, open science, and ethics.
This presentation discusses the emergence of professional support staff, and how you can benefit from involving your local research manager and administrator.
The tools available at our disposal for solving some of the really hard problems our society is facing today are old, inadequate, and need to be deprecated. Everyone is looking to Artificial Intelligence based solutions like it’s the next gold rush, without understanding how Machine Learning actually works. We’re pumping massive amounts of data into these systems, without realizing that this data came before Artificial Intelligence was a thing, and even before the Internet or computers existed: 2D images are actually just digital representations of something that was available before in analog form.

Someone needs to pause, zoom out, and take a look first and foremost at the problems that we need to solve, identify and analyze them, and only then derive complete technical solutions that might or might not involve the current generation of Machine Learning en vogue, but most importantly, might indicate that new types and formats of data, whether visual or otherwise, need to be created. And the field that made significant progress there is robotics, where mapping and identifying the world with high accuracy was an absolute requirement for the stability and performance of a machine moving into our world. However, concepts such as 3D visual representations have not yet been translated fully to scale and made de facto standards for other more common applications.

In this talk, we’re taking a trip down memory lane at some of these concepts, and discussing how open source platforms such as the Point Cloud Library (PCL) have contributed to the proliferation of new visual understanding technologies. We’re also taking a look at Fyusion, which attempts to redefine the meaning of “scalable 3D visual formats”, and has created the first comprehensive and scalable technology stack for capturing photorealistic 3D spatial models of the real world using a single camera, built with Visual Understanding in mind.
In this talk, I review the Soar cognitive architecture, including the motivation for cognitive architectures, the history of Soar, the applications it has been used in, and our current research on Interactive Task Learning. I then discuss Soar from the standpoint of an open-source research project: positives, negatives, and challenges.
This course was published by David Vernon in 2020. The course "covers both the essentials of classical robotics (mobile robots, robot arms for manipulation, and robot vision) and the principles of cognition (cognitive architectures, learning and development, prospection, memory, knowledge representation, internal simulation, and meta-cognition).
It brings these components together by working through some recent advances in robotics for everyday activities, and by including practical and detailed material based on the CRAM (Cognitive Robot Abstract Machine) cognitive architecture, incorporating the KnowRob knowledge base, building on ROS (Robot Operating System) and exploiting functional, object-oriented, and logic programming to reason about and execute under-specified tasks in everyday activities.
The course emphasizes both theory and practice and makes use of physical robots and robot simulators for visual sensing and actuation." [David Vernon, 2020]