Probabilistic robotics is a relatively new area of robotics concerned with robot perception and manipulation in the face of uncertainty and incomplete knowledge about the world. In his thorough and concise talk, Daniel Nyga introduces the basics of probability theory. He then presents probabilistic graphical models, including Bayesian networks and Markov Random Fields, explores statistical relational learning using Markov logic networks, and concludes with probabilistic natural language understanding.
In their lecture, Jürgen Sturm and Christoph Schütte from Google Germany talk about Google’s Cloud Robotics project before diving deeper into specific robot perception problems. Christoph Schütte introduces Cartographer, a system that provides real-time simultaneous localization and mapping (SLAM) in 2D and 3D across multiple platforms. Jürgen Sturm closes the lecture with semantic mapping and spatial intelligence in artificial intelligence. In a nutshell, spatial intelligence describes the ability to understand and remember spatial relations among objects or in space; it allows humans to navigate their environment and to perform tasks. The same applies to robots and their understanding of their environment. Their educational, application-oriented lecture is suitable for beginners.
Part 2: In his follow-up lecture, David Vernon dives deeper into the role of memory, especially in a cognitive system. Memory, as he states, is a process rather than a state. He presents different types of memory, explores the role memory plays, explains the concepts of self-projection, prospection, and internal simulation, and clarifies technical terminology from cognitive science and psychology as well as from the robotics literature. He concludes with internal simulation combined with action, and with the concept of forgetting, which is important but still not fully understood in neuroscience. His exceedingly interesting talk is suitable for beginners.
Part 1: In the first of his three talks, David Vernon gives a concise and coherent overview of cognitive architectures. He begins by explaining the concept of cognition as an umbrella term that encompasses perception, learning, anticipation, action, and adaptation. Cognition allows robots to work independently in challenging environments, to adapt to changes, and to anticipate events when preparing their actions. If cognition were the top of a mountain and the goal to be reached, the architecture would be the base camp that needs to be set up first. Listen to his interesting lecture, which is also suitable for beginners.
Part 1: In the opening talk of the first EASE Fall School, Michael Beetz discusses the wide range of topics around cognition-enabled robotics. He explains the challenges and complexity of building and programming a robot that reaches the same level of efficiency in performing everyday tasks as humans do. Listen to his thorough introduction on knowledge representation and reasoning, logic-based knowledge representation and reasoning in particular, on manipulation intelligence, and on the research approach of the Collaborative Research Center EASE.
Part 3: In this presentation, Axel Ngonga uses the example of a cooking robot to explain the interesting approach of class expression learning with multiple representations.
Integrating advanced skills into robotic systems for reliable autonomous operation is a pressing challenge. Current robot software architectures, many based on the three-layer paradigm, are not flexible enough to adapt dynamically to changes in the environment, the mission, or the robot itself (e.g., internal faults).
In this talk, I will motivate the application of systems engineering in robotics to address this challenge.
I will present our results leveraging systems engineering knowledge for automatic reasoning to adapt the robot software architecture at runtime, with examples in mobile manipulators and underwater robots.
To conclude, I will briefly present our upcoming projects, CoreSense and METATOOL, where we will develop hybrid cognitive architectures with deep understanding and self-awareness capabilities, hopefully resulting in robots that are more flexible, reliable and explainable, and that are capable of tool invention, respectively.
In this course, David Vernon presents the 10th module of the "Design & Implementation of Cognition-enabled Robot Agents" series. The learning goals of this module enable you to:
- Describe the three paradigms of cognitive science
- Explain the characteristics of cognitive architecture in each of the three paradigms
- Describe the key components of a hybrid cognitive architecture
- Sketch the design of the ISAC and CRAM cognitive architectures and explain how they operate
Four lectures explain cognitive architectures and how cognitive agents can use them.
THE SYNTAX OF ACTION
Abstract: Understanding human activity from video is a fundamental problem in today’s computer vision and imitation learning. The video discusses the issue of the syntax of human activity and advances the viewpoint that perceived human activity first needs to be parsed, just as language is. Building on these ideas, the video proposes the Ego-OMG framework. Egocentric object manipulation graphs are extracted from a basic parsing of a video of human activity (they represent the contacts of the left and right hand with objects in the scene) and can be used for action prediction.
Yiannis Aloimonos is a Professor of Computational Vision and Intelligence at the Department of Computer Science, University of Maryland, College Park, and the Director of the Computer Vision Laboratory at the Institute for Advanced Computer Studies (UMIACS). He is also affiliated with the Institute for Systems Research and the Neural and Cognitive Science Program. He was born in Sparta, Greece, and studied Mathematics in Athens and Computer Science at the University of Rochester, NY (PhD 1990). He is interested in Active Perception and the modeling of vision as an active, dynamic process for real-time robotic systems. For the past five years, he has been working on bridging signals and symbols, specifically on the relationship of vision to reasoning, action, and language.
Path planning is a core problem in robotics. Lydia Kavraki developed a method called the Probabilistic Roadmap Method (PRM), which caused a paradigm shift in the robotics community. The approach introduced randomization schemes that exploited local geometric properties and produced efficient solutions without fully exploring the underlying search space.
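The PRM idea described above can be illustrated with a minimal sketch: sample random configurations in free space, connect nearby samples with collision-free local paths to build a roadmap, then answer queries by attaching start and goal to the roadmap and searching the graph. The 2D world, single circular obstacle, sample count, connection radius, and breadth-first query below are illustrative assumptions, not Kavraki's original formulation.

```python
# Minimal 2D sketch of the Probabilistic Roadmap Method (PRM).
# World: unit square with one circular obstacle (an illustrative assumption).
import math
import random
from collections import deque

OBSTACLES = [(0.5, 0.5, 0.15)]  # (center_x, center_y, radius)

def collision_free(p):
    """A point is free if it lies outside every obstacle disc."""
    return all(math.dist(p, (ox, oy)) > r for ox, oy, r in OBSTACLES)

def segment_free(a, b, steps=20):
    """Check intermediate points along the straight-line local path."""
    return all(collision_free((a[0] + (b[0] - a[0]) * t / steps,
                               a[1] + (b[1] - a[1]) * t / steps))
               for t in range(steps + 1))

def build_roadmap(n_samples=300, radius=0.2, seed=0):
    """Learning phase: sample free configurations and connect nearby pairs."""
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < n_samples:
        p = (rng.random(), rng.random())
        if collision_free(p):
            nodes.append(p)
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if (math.dist(nodes[i], nodes[j]) < radius
                    and segment_free(nodes[i], nodes[j])):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges

def query(nodes, edges, start, goal, radius=0.2):
    """Query phase: attach start/goal to the roadmap, then BFS for a path."""
    nodes = nodes + [start, goal]
    s, g = len(nodes) - 2, len(nodes) - 1
    edges = {k: list(v) for k, v in edges.items()}
    edges[s], edges[g] = [], []
    for i in range(len(nodes) - 2):
        for q in (s, g):
            if (math.dist(nodes[i], nodes[q]) < radius
                    and segment_free(nodes[i], nodes[q])):
                edges[q].append(i)
                edges[i].append(q)
    parent, frontier = {s: None}, deque([s])
    while frontier:
        u = frontier.popleft()
        if u == g:  # reconstruct the path back to the start
            path = []
            while u is not None:
                path.append(nodes[u])
                u = parent[u]
            return path[::-1]
        for v in edges[u]:
            if v not in parent:
                parent[v] = u
                frontier.append(v)
    return None  # start and goal lie in disconnected roadmap components
```

The key property the blurb alludes to is visible here: the planner never discretizes or exhaustively explores the configuration space; it only tests randomly sampled configurations and short local connections between them.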
Lydia E. Kavraki is the Noah Harding Professor of Computer Science, professor of Bioengineering, professor of Electrical and Computer Engineering, and professor of Mechanical Engineering at Rice University. She is also the Director of the Ken Kennedy Institute at Rice. In robotics and AI, she is interested in enabling robots to work with people and in support of people.