
This hands-on seminar introduces students to 3D perception as a functional component of autonomous robotic systems, with a strong emphasis on system integration and engineering clarity. Rather than treating sensing, reconstruction, and visualization as isolated tasks, the seminar frames 3D spatial information as operational knowledge that enables robots to make decisions and act in the physical world.
Students will explore different sensing techniques and 3D representations, and learn how spatial data flows through an autonomous system—from sensors, to perception pipelines, to decision-making and action. The seminar emphasizes coherent system architecture, explicit data flow, and feasibility-driven implementation.
Working in small groups, students will develop a rapid prototype demonstrating how 3D perception can support a simple autonomous behavior in a mobile robot. The focus is not on building complete navigation systems, but on understanding and implementing the minimal components required to close the perception–action loop, while making the necessary simplifying assumptions explicit.
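As an illustration of what "closing the perception–action loop" can mean at its most minimal, the sketch below (not part of the course materials; all function names and thresholds are illustrative assumptions) reduces simulated range readings to a nearest-obstacle distance and maps that distance to a forward speed command. In the actual prototype this logic would sit inside a ROS 2 node, with sensor messages in and velocity commands out.

```python
# Minimal perception-action loop sketch (illustrative only): sense ->
# perceive -> decide -> act. Names and thresholds are assumptions, not
# part of the seminar's reference implementation.

def perceive(ranges):
    """Perception: reduce raw range readings to the nearest obstacle distance."""
    valid = [r for r in ranges if r > 0.0]  # discard invalid (zero) returns
    return min(valid) if valid else float("inf")

def decide(nearest, stop_dist=0.5, slow_dist=1.5, cruise=0.3):
    """Decision: choose a forward speed from the nearest obstacle distance."""
    if nearest < stop_dist:
        return 0.0                      # obstacle too close: stop
    if nearest < slow_dist:
        # scale speed linearly between stop_dist and slow_dist
        return cruise * (nearest - stop_dist) / (slow_dist - stop_dist)
    return cruise                       # path clear: cruise speed

# One iteration of the loop with simulated laser ranges (metres):
scan = [2.0, 1.8, 0.4, 3.1]
speed = decide(perceive(scan))
print(speed)  # nearest obstacle at 0.4 m < stop_dist, so the robot stops: 0.0
```

Even this toy version surfaces the seminar's core questions: what assumptions the perception step makes (e.g. that zero readings are invalid), and how the decision layer should degrade as sensing becomes uncertain.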
Learning Objectives
Upon completion of the course, students will be able to:
- Understand the role of 3D sensing and perception within autonomous robotic systems
- Compare different sensing modalities and 3D representations
- Design a basic perception pipeline for spatial understanding
- Integrate perception, reasoning, and decision layers into a coherent system
- Implement a rapid computational prototype using Python and ROS 2
- Critically evaluate system limitations, assumptions, and failure cases
- Clearly communicate system architecture and technical decisions