Course Syllabus (CISC667)
Course Description
This course introduces foundational concepts and technologies in autonomous driving. Topics include perception, planning, control, Robot Operating System 2 (ROS2), and simulation environments.
Students will explore both theoretical foundations and practical implementations of autonomous vehicle systems. Through a combination of lectures and team-based projects, students will gain experience in real-world problem solving, system integration, and collaborative development. Students will also gain hands-on experience with 1/10th-scale autonomous vehicles in lab exercises.
The course provides relevant preparation for careers and research in robotics, embedded systems, intelligent transportation, and AI-driven mobility solutions.
Learning Outcomes
By the end of this course, you will be able to:
Implement basic perception algorithms for autonomous vehicles.
Design planning and control modules in a simulated driving environment.
Use ROS2 to develop and integrate autonomous driving components.
Deploy algorithms on 1/10th-scale autonomous vehicles in a physical lab setting.
Work effectively in teams to develop and test autonomous vehicle systems.
Understand the end-to-end architecture and design considerations for autonomous driving platforms.
Tentative 14-Week Schedule
Week 1 (Aug 27–28): Course Intro, ROS2 Setup
Week 2 (Sep 2–4): ROS2 Fundamentals & Simulation
Week 3 (Sep 9–11): Perception I – Sensors & Models
Week 4 (Sep 16–18): Perception II – Vision & Fusion (Project 1 assigned)
Week 5 (Sep 23–25): Localization & Mapping
Week 6 (Sep 30–Oct 2): Control Theory for AVs
Week 7 (Oct 7–9): Trajectory Planning
Week 8 (Oct 14–16): Planning & Control Integration (Project 1 due)
Week 9 (Oct 21–23): System Architecture (Project 2 assigned)
Week 10 (Oct 28–30): Guest Lecture / Industry View
Week 11 (Nov 4–6): Simulation Lab & Debugging
Week 12 (Nov 11–13): Safety, Testing & Validation (Project 2 due)
Week 13 (Nov 18–20): AV Communication (V2X) (Project 3 assigned)
Week 14 (Dec 2–4): Wrap-up, Team Presentations (Project 3 due Dec 9)
Course Projects
Project 1
Story: You are a perception software engineer in a group at a startup called UDrive. Your group is given recorded data that includes LiDAR, camera, IMU, and GNSS streams, captured while the vehicle drove in simulation around the STAR Campus.
Goal: The planning team would like to know how far the ego-vehicle (you) is from the stop sign, other vehicles, and pedestrians. They would like each measurement both locally in meters (x, y, z) and globally as (latitude, longitude, altitude).
Deliverables: C++ or Python code using ROS2 that publishes the names of detected objects, their distances in x and y, and their coordinates in latitude and longitude. Write a 1-page document detailing what you did and what problems you and your group encountered, and record a video demonstrating the results from your code. Also include a summary of individual contributions.
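To give a feel for the local-to-global conversion the planning team asks for, here is a minimal sketch that turns a local east/north offset in meters into latitude/longitude. It assumes a flat-earth (equirectangular) approximation around a reference GNSS fix, which is adequate at the short ranges involved in the detection task; the origin coordinates and function name below are illustrative, not part of the project handout.

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def local_to_global(x_east_m, y_north_m, origin_lat_deg, origin_lon_deg):
    """Convert a local ENU offset (meters) to (latitude, longitude).

    Flat-earth approximation: one radian of latitude spans one Earth
    radius; longitude spacing shrinks by cos(latitude).
    """
    dlat = math.degrees(y_north_m / EARTH_RADIUS_M)
    dlon = math.degrees(
        x_east_m / (EARTH_RADIUS_M * math.cos(math.radians(origin_lat_deg)))
    )
    return origin_lat_deg + dlat, origin_lon_deg + dlon

# Example: a stop sign detected 10 m east and 5 m north of the ego-vehicle
lat, lon = local_to_global(10.0, 5.0, 39.6780, -75.7500)
```

In the actual deliverable this conversion would run inside a ROS2 node, with the origin taken from the vehicle's GNSS fix rather than hard-coded.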
Project 2
Story: Thank you for the awesome work you put into the first project; the planning team is very impressed. They now have a new task for you: you are given camera, LiDAR, IMU, and GNSS data.
Goal: The planning team would like you to create an outdoor and an indoor map so they can test their path planning and control code. You and your team need to build an HD map for mapping and localization, and annotate drivable areas such as driving lanes, bike lanes, and signage.
Deliverables: An HD map including point cloud data and Lanelet2 map data. The HD map points should include local position information for the indoor and outdoor maps. Include a video demonstrating localization against the HD map. Write a 1-page document detailing what you did and what problems you and your group encountered. Also include a summary of individual contributions.
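For orientation, a Lanelet2 map is stored as OSM-style XML: point nodes, left and right boundary ways, and a relation of type "lanelet" tying the boundaries together. The sketch below builds a single short, straight lanelet with made-up IDs, coordinates, and tags; a real map would come from your mapping pipeline and carry far more detail.

```python
import xml.etree.ElementTree as ET

def build_minimal_lanelet():
    """Build one straight lanelet in Lanelet2's OSM-style XML.

    All IDs, coordinates, and tag values here are illustrative.
    """
    osm = ET.Element("osm", version="0.6")
    # Two points per boundary: a short straight segment.
    points = {
        1: (39.67800, -75.75000), 2: (39.67810, -75.75000),  # left boundary
        3: (39.67800, -75.74996), 4: (39.67810, -75.74996),  # right boundary
    }
    for node_id, (lat, lon) in points.items():
        ET.SubElement(osm, "node", id=str(node_id), lat=str(lat), lon=str(lon))
    # Boundary ways referencing the nodes above.
    for way_id, refs in ((10, (1, 2)), (11, (3, 4))):
        way = ET.SubElement(osm, "way", id=str(way_id))
        for ref in refs:
            ET.SubElement(way, "nd", ref=str(ref))
        ET.SubElement(way, "tag", k="type", v="line_thin")
    # The lanelet relation pairs the left and right boundaries.
    rel = ET.SubElement(osm, "relation", id="100")
    ET.SubElement(rel, "member", type="way", role="left", ref="10")
    ET.SubElement(rel, "member", type="way", role="right", ref="11")
    ET.SubElement(rel, "tag", k="type", v="lanelet")
    ET.SubElement(rel, "tag", k="subtype", v="road")
    return ET.tostring(osm, encoding="unicode")

xml_text = build_minimal_lanelet()
```

Looking at a generated fragment like this is a quick sanity check before loading your map into the localization stack.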
