LiDAR-Based Construction Progress Monitoring with Spot Quadruped

Introduction

This project demonstrates a sponsor-facing deployment of semantic exploration and progress tracking using the Spot quadruped robot.
The system fuses LiDAR, panoramic-camera, and IMU data for environment reconstruction and change detection in construction environments.
It was developed and tested at CERLAB, Carnegie Mellon University, and served as an applied prototype that later evolved into the full Semantic Exploration and Dense Mapping framework.


Stage 1: Exploration and Window Detection

The Spot robot autonomously explored mock construction environments, generating voxel maps while detecting target objects (windows) with a customized YOLOv7 model.
Detected bounding boxes were then compared across runs to distinguish installed from uninstalled windows, as sketched below.
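To make the cross-run comparison concrete, here is a minimal Python sketch of matching detections between two runs by intersection-over-union (IoU). The box format, threshold, and function names are illustrative assumptions, not the project's exact pipeline.

```python
# Hypothetical sketch: match window detections between two runs by IoU.
# Box format assumed: (x_min, y_min, x_max, y_max) in a common frame.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def diff_detections(run_a, run_b, thresh=0.5):
    """Label each box in run_b as 'existing' (matched in run_a) or 'new'."""
    labels = []
    for box in run_b:
        matched = any(iou(box, prev) >= thresh for prev in run_a)
        labels.append("existing" if matched else "new")
    return labels

# Example: one window seen in both runs, one appearing only in run B.
run_a = [(1.0, 0.5, 2.0, 1.8)]
run_b = [(1.05, 0.5, 2.0, 1.75), (3.2, 0.6, 4.1, 1.9)]
print(diff_detections(run_a, run_b))  # ['existing', 'new']
```

Boxes flagged as "new" in the later run correspond to windows installed between the two inspections.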

Demo 1 – Exploration and window detection

Demo 2 – Map comparison showing detected window installation difference


Stage 2: Dense Mapping and Change Detection

FAST-LIO was integrated for dense background mapping, and CloudCompare was used to visualize dense point cloud differences before and after window installation, providing detailed progress tracking. The system was deployed both in simulation and in the real world.
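CloudCompare's cloud-to-cloud (C2C) distance can also be approximated programmatically. The sketch below uses Open3D as an assumed stand-in library (not part of the original pipeline) to flag points in the "after" cloud that lie far from any point in the "before" cloud; the file names and the 5 cm threshold are illustrative.

```python
# Programmatic analogue of CloudCompare's cloud-to-cloud (C2C) distance,
# using Open3D (an assumed dependency, not the project's actual tooling).
import numpy as np
import open3d as o3d

def highlight_changes(before_path, after_path, dist_thresh=0.05):
    """Color points in the 'after' cloud red where they lie farther than
    dist_thresh (meters) from any point in the 'before' cloud."""
    before = o3d.io.read_point_cloud(before_path)
    after = o3d.io.read_point_cloud(after_path)
    # Per-point nearest-neighbor distance from 'after' to 'before'.
    dists = np.asarray(after.compute_point_cloud_distance(before))
    changed = dists > dist_thresh
    colors = np.tile([0.6, 0.6, 0.6], (len(dists), 1))  # gray = unchanged
    colors[changed] = [1.0, 0.0, 0.0]                   # red = changed
    after.colors = o3d.utility.Vector3dVector(colors)
    return after, changed

# Usage (hypothetical file names):
# cloud, mask = highlight_changes("before_install.pcd", "after_install.pcd")
# o3d.visualization.draw_geometries([cloud])
```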

Simulation verification of UGV exploration

Point cloud difference analysis using CloudCompare

Real-world LiDAR-based exploration test

Real-world mapping results and difference visualization


Stage 3: Integration into the Semantic Exploration Framework

The methods developed in this demo—LiDAR-based mapping, semantic detection, and point cloud differencing—were later extended into the Semantic Exploration and Dense Mapping with Panoramic LiDAR–Camera Fusion framework.
The extended framework enables multi-modal semantic mapping, object-level reconstruction, and autonomous exploration for industrial monitoring applications. For more details, see the Semantic Exploration and Dense Mapping project.
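As a rough illustration of the panoramic LiDAR–camera fusion idea, the sketch below projects LiDAR points into equirectangular panorama coordinates to look up per-pixel semantic labels. The projection model, image resolution, and variable names are assumptions; extrinsic LiDAR–camera calibration is omitted, and points are assumed to already be in the camera frame.

```python
# Minimal sketch of panoramic LiDAR-camera fusion: project 3D LiDAR
# points into an equirectangular panorama to look up per-pixel semantic
# labels. Calibration and the segmentation model are outside this sketch.
import numpy as np

def project_equirectangular(points, width, height):
    """Map Nx3 points (camera frame) to pixel coordinates in a
    width x height equirectangular panorama."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    azimuth = np.arctan2(y, x)                                  # [-pi, pi]
    elevation = np.arcsin(z / np.linalg.norm(points, axis=1))   # [-pi/2, pi/2]
    u = ((azimuth + np.pi) / (2 * np.pi) * width).astype(int) % width
    v = ((np.pi / 2 - elevation) / np.pi * height).astype(int)
    return u, v.clip(0, height - 1)

# Example: attach semantic labels from a panoramic segmentation mask.
points = np.array([[1.0, 0.0, 0.2], [0.0, -1.0, -0.1]])
mask = np.zeros((512, 1024), dtype=np.int32)  # hypothetical label image
u, v = project_equirectangular(points, width=1024, height=512)
labels = mask[v, u]  # per-point semantic class IDs
```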

System overview
Figure 1 – The complete semantic exploration framework