Determining Ground Elevation

This paper presents a convolutional neural network (CNN) method that identifies and locates static vegetation in drone-based high-resolution orthoimages.

The developed CNN-based image classification models are supplemented with an overlapping disassembling algorithm that generates 8 × 8-pixel, 16 × 16-pixel, 32 × 32-pixel, or 64 × 64-pixel small patches as model inputs.
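The following is a minimal sketch of such an overlapping disassembling step, not the authors' implementation. The abstract does not state the overlap step; a stride of 16 pixels is assumed here because, for 32 × 32-pixel patches on a 1,536 × 1,536-pixel image, it reproduces the 9,025-patch count reported below.

```python
# Sketch of an overlapping "disassembling" step (function name, stride, and
# array layout are illustrative assumptions, not the authors' exact code).
import numpy as np

def disassemble(image: np.ndarray, patch: int, stride: int) -> np.ndarray:
    """Crop an H x W x C orthoimage into overlapping patch x patch tiles."""
    h, w = image.shape[:2]
    tiles = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            tiles.append(image[top:top + patch, left:left + patch])
    return np.stack(tiles)

# Example: a 1,536 x 1,536 RGB orthoimage cropped into 32 x 32 patches with
# a stride of 16 yields 95 x 95 = 9,025 overlapping tiles as model inputs.
rgb = np.zeros((1536, 1536, 3), dtype=np.uint8)
print(disassemble(rgb, patch=32, stride=16).shape)  # (9025, 32, 32, 3)
```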

The training data comprise 10 pairs of 1,536 × 1,536-pixel orthoimages and their corresponding label images.

Experimental results show that cropping a high-resolution image into 9,025 overlapping 32 × 32-pixel small patches (each covering 17.28 × 17.28 cm² on site) for image classification, and then assembling the small-patch label-image predictions into a patch-wise label-image prediction, achieves an average pixel accuracy of 92.6% in identifying objects on the experimental site.
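A minimal sketch of the assembling step follows. The way overlapping predictions are resolved here (assigning each patch's class to the stride-sized cell at its top-left corner and edge-padding the border) is an assumption; the abstract does not describe the authors' assembly rule.

```python
# Sketch of assembling per-patch class predictions into a patch-wise label
# image (names and the overlap-resolution rule are assumptions).
import numpy as np

def assemble(pred_classes: np.ndarray, image_size: int, patch: int,
             stride: int) -> np.ndarray:
    """Map a flat array of per-patch class labels onto an image-sized grid."""
    n = (image_size - patch) // stride + 1          # patches per row/column (95)
    grid = pred_classes.reshape(n, n)               # coarse 95 x 95 label grid
    label = np.repeat(np.repeat(grid, stride, 0), stride, 1)
    # Pad the right/bottom border not covered by a full stride cell.
    pad = image_size - label.shape[0]
    return np.pad(label, ((0, pad), (0, pad)), mode="edge")

preds = np.random.randint(0, 3, size=9025)           # e.g., 3 object classes
label_image = assemble(preds, image_size=1536, patch=32, stride=16)
print(label_image.shape)                              # (1536, 1536)
```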

A vegetation-removing algorithm is designed to divide the label-image prediction into 36,864 nonoverlapping 8 × 8-pixel patches and traverse them in 192 row-loops and 191 column-loops.
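The skeleton below illustrates one way such a traversal could be organized; the rule applied in each loop is not given in the abstract, so flagging a patch as vegetation by its dominant label and comparing each patch with its right-hand neighbor (192 row loops of 191 column comparisons) are assumptions.

```python
# Skeleton of the non-overlapping 8 x 8 patch traversal (assumed logic).
import numpy as np

VEGETATION = 1                                     # assumed vegetation class code

def traverse(label_image: np.ndarray, patch: int = 8):
    rows = label_image.shape[0] // patch           # 1,536 / 8 = 192
    cols = label_image.shape[1] // patch           # 192 x 192 = 36,864 patches
    # Dominant label of each non-overlapping 8 x 8 patch.
    is_veg = np.array([[(label_image[r*patch:(r+1)*patch,
                                     c*patch:(c+1)*patch] == VEGETATION).mean() > 0.5
                        for c in range(cols)] for r in range(rows)])
    boundaries = []
    for r in range(rows):                          # 192 row loops
        for c in range(cols - 1):                  # 191 column comparisons per row
            if is_veg[r, c] != is_veg[r, c + 1]:
                boundaries.append((r, c))          # vegetation/ground transition
    return is_veg, boundaries
```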

The testing results show that elevations at vegetation locations identified in the label images are modified to the “truth” ground elevations and verified with two datasets obtained on different dates.
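As a hedged illustration of this correction step, the sketch below overwrites elevation cells flagged as vegetation with a ground elevation taken from nearby non-vegetation cells; using the mean of surrounding ground cells within a fixed window is an assumption rather than the authors' exact rule.

```python
# Sketch of replacing vegetation elevations with nearby ground elevations
# (window size and averaging rule are assumptions).
import numpy as np

def remove_vegetation(elevation: np.ndarray, is_veg: np.ndarray,
                      window: int = 3) -> np.ndarray:
    corrected = elevation.copy()
    rows, cols = elevation.shape
    for r, c in zip(*np.nonzero(is_veg)):
        r0, r1 = max(r - window, 0), min(r + window + 1, rows)
        c0, c1 = max(c - window, 0), min(c + window + 1, cols)
        ground = elevation[r0:r1, c0:c1][~is_veg[r0:r1, c0:c1]]
        if ground.size:
            corrected[r, c] = ground.mean()        # nearby ground elevation
    return corrected
```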

The measured elevation differentials are close to the measured vegetation heights on the experimental site.

This research advances the drone-based orthoimaging method for construction site surveying by automatically identifying static obstacles and determining ground elevations more accurately.

https://doi.org/10.1061/(ASCE)CP.1943-5487.0000930