Document Type
Article
Language
eng
Publication Date
12-2021
Publisher
American Society of Civil Engineers
Source Publication
Journal of Performance of Constructed Facilities
Source ISSN
0887-3828
Abstract
This paper presents an accurate and stable method for object and defect detection and visualization on building and infrastructural facilities. The method uses drones and cameras to collect three-dimensional (3D) point clouds via photogrammetry, and uses orthographic or arbitrary views of the target objects to generate feature images of the points’ spectral, elevation, and normal features. U-Net is implemented for pixelwise segmentation in object and defect detection using multiple feature images. The method was validated in four applications: on-site path detection, pavement crack detection, highway slope detection, and building facade window detection. The comparative experimental results confirmed that U-Net with multiple features achieves better pixelwise segmentation performance than using each single feature separately. The developed method can detect objects and defects of different shapes, including striped objects, thin objects, recurring and regularly shaped objects, and bulky objects, which will improve the accuracy and efficiency of inspection, assessment, and management of buildings and infrastructural facilities.
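To illustrate the multiple-feature segmentation idea summarized in the abstract, the following is a minimal sketch (not the authors' implementation) of how spectral, elevation, and normal feature images rendered from a point cloud could be stacked channel-wise and passed to a small U-Net-style network, assuming PyTorch; the layer sizes, tensor shapes, and class labels are hypothetical.

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A toy encoder-decoder with one skip connection, standing in for a full U-Net."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                     # full-resolution features
        e2 = self.enc2(self.pool(e1))                         # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # upsample + skip connection
        return self.head(d1)                                  # per-pixel class scores

# Hypothetical feature images generated from an orthographic view of the point cloud:
spectral = torch.rand(1, 3, 256, 256)   # RGB (spectral) image
elevation = torch.rand(1, 1, 256, 256)  # per-pixel elevation map
normals = torch.rand(1, 3, 256, 256)    # surface-normal components
features = torch.cat([spectral, elevation, normals], dim=1)  # 7-channel input

model = TinyUNet(in_channels=7, num_classes=2)  # e.g., defect vs. background
logits = model(features)                        # shape (1, 2, 256, 256)

Stacking the feature images as extra input channels lets the network learn from geometric cues (elevation and normals) alongside color, which reflects the abstract's finding that multiple features outperform any single feature used alone.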
Recommended Citation
Jiang, Yuhan; Han, Sisi; and Bai, Yong, "Building and Infrastructure Defect Detection and Visualization Using Drone and Deep Learning Technologies" (2021). Civil and Environmental Engineering Faculty Research and Publications. 280.
https://epublications.marquette.edu/civengin_fac/280
Comments
Accepted version. Journal of Performance of Constructed Facilities, Vol. 35, No. 6 (December 2021). DOI. © 2021 American Society of Civil Engineers. Used with permission.