Applied Computer Vision & Robotics with AI for Architecture and Interactive Systems


Syllabus


HARDWARE II is an advanced, practice-oriented course focused on applying real-time
computer vision and artificial intelligence to architectural, robotic, and interactive systems. The course guides students through the full AI vision pipeline, from ethical data collection and dataset design to model training, optimization, and deployment in physical environments. Using tools such as Roboflow, YOLO, OpenCV, ONNX, and ROS2, students develop autonomous systems capable of perceiving, analyzing, and responding to spatial conditions in real time.

Emphasis is placed on transforming visual information into quantitative spatial intelligence, including distance, speed, density, and occupancy metrics relevant to architecture and robotics. Students critically engage with issues of bias, privacy, and responsibility in AI-driven visual systems, particularly within public and built environments. The course culminates in an independent project where students design and implement a fully integrated AI system connecting perception, decision-making, and physical or interactive actuation.
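The step from raw detections to "quantitative spatial intelligence" can be illustrated with a minimal sketch. Assuming each detection arrives as an `(x1, y1, x2, y2)` pixel bounding box (as YOLO-style detectors typically output) and a known pixels-per-metre scale for the observed floor plane, density and occupancy reduce to a few lines; the function name and scale value are illustrative, not part of any course codebase:

```python
# Hypothetical sketch: turning object detections into occupancy metrics.
# Assumes axis-aligned (x1, y1, x2, y2) pixel boxes and a known
# pixels-per-metre scale for the observed plane (assumed values).

def occupancy_metrics(boxes, frame_w, frame_h, px_per_m):
    """Compute simple count, density, and occupancy figures from boxes."""
    frame_area_m2 = (frame_w / px_per_m) * (frame_h / px_per_m)
    covered_px = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes)
    return {
        "count": len(boxes),
        "density_per_m2": len(boxes) / frame_area_m2,
        "occupancy_ratio": covered_px / (frame_w * frame_h),
    }

# Example: two detections in a 1280x720 frame at 100 px/m
m = occupancy_metrics([(100, 200, 180, 420), (600, 250, 690, 500)],
                      1280, 720, 100)
```

In practice the boxes would come from a tracker running on live video, and the per-frame metrics would be smoothed over time before driving any actuation.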

Learning Objectives


By the end of this course, students will be able to:

  1. Design and manage high-quality, ethically responsible datasets for computer vision applications
  2. Train and evaluate real-time object detection models using YOLO
  3. Implement OpenCV-based spatial analysis, tracking, and geometric measurement
  4. Convert visual detections into real-world quantitative metrics
  5. Deploy optimized AI models using ONNX for real-time inference
  6. Integrate AI perception systems with ROS2, robots, Arduino, or interactive architectural interfaces
  7. Analyze system performance, limitations, and failure cases
  8. Design and implement an end-to-end autonomous AI system for architectural or robotic applications
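Objective 4 (converting detections into real-world metrics) can be sketched with a planar homography: a 3×3 matrix, typically estimated from known floor markers (e.g. via OpenCV's `findHomography`), maps pixel coordinates onto ground-plane metres, after which speed follows from displacement between frames. The matrix below is a made-up example (100 px = 1 m, no rotation), not a calibrated result:

```python
import numpy as np

# Hypothetical example homography: 100 px = 1 m, no rotation or tilt.
H = np.array([[0.01, 0.0, 0.0],
              [0.0, 0.01, 0.0],
              [0.0, 0.0, 1.0]])

def pixel_to_world(u, v):
    """Project a pixel (u, v) onto the ground plane, in metres."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w  # divide by w: homographies are projective

def speed_m_s(p0, p1, dt):
    """Speed between two world-space points observed dt seconds apart."""
    (x0, y0), (x1, y1) = p0, p1
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt

a = pixel_to_world(100, 200)  # (1.0, 2.0) metres under this H
b = pixel_to_world(160, 200)  # 0.6 m to the right
```

With a real camera the homography would be recalibrated whenever the camera moves, and the perspective division matters: for a tilted view `w` is no longer 1.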

Faculty


Faculty Assistants


Projects from this course


RE:PAIR – Perception-Driven Closed-Loop Robotic Repair. Faculty: Hamid Peiro and Aleksandra Kraeva (Sasha). Mission Statement: RE:PAIR is a perception-driven, decision-visible closed-loop robotic system that detects cracks, repairs them using robotic 3D printing, and autonomously verifies the result. 1) Crack detection 2) Segmentation and masking 3) Centreline and points. First we laid out the workflow of the project. … Read more

Vision-Based Gesture-Controlled Robotic Manipulation System

Team member(s): Elias, Rafik, Seid, Leo and Dhruvil. This blog post presents a vision-based gesture-controlled robotic manipulation system developed for intuitive human-robot interaction. The project replaces traditional robot programming interfaces, such as teach pendants and GUIs, with a natural hand-gesture pipeline, allowing a human operator to direct a UR10e industrial robotic arm … Read more

CIRCUIT – Hardware II Project

Circular Intelligence for Robotic Classification & Upcycling of Industrial Timber. How can a robotic system identify, measure, and sort reusable construction materials based on predefined constraints? This project proposes a vision-guided robotic system for supporting the reuse of construction materials through automated detection, analysis, and manipulation. A fixed RGB camera in an eye-to-hand configuration observes a … Read more