Fusing ROV-based photogrammetric underwater imagery with multibeam soundings for reconstructing wrecks in turbid waters

Fig. 3: Original image (left) and enhanced image using the LAB algorithm (right)
… compensated by image distortion parameters (Menna et al. 2016; Nocerino et al. 2016). Otherwise, the ray path could be modelled explicitly by applying ray-tracing approaches. This, however, would require a specific bundle adjustment solution, eliminating the option of using standard structure-from-motion (SfM) software as it is commercially available to users from administration and industry.
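To illustrate why the refraction at a flat port can largely be absorbed by the camera calibration, the following sketch traces a ray through an air–glass–water interface with Snell's law. The refractive indices are assumed nominal values and are not taken from the paper.

```python
import numpy as np

# Assumed nominal refractive indices (not taken from the paper).
N_AIR, N_GLASS, N_WATER = 1.00, 1.49, 1.34

def refract(theta_in, n_in, n_out):
    """Snell's law: n_in * sin(theta_in) = n_out * sin(theta_out)."""
    return np.arcsin(np.clip(n_in * np.sin(theta_in) / n_out, -1.0, 1.0))

def water_angle(theta_air):
    """Trace a ray through a flat air-glass-water port (angles w.r.t. the port normal)."""
    return refract(refract(theta_air, N_AIR, N_GLASS), N_GLASS, N_WATER)

# The ratio tan(theta_air)/tan(theta_water) is ~n_water near the optical axis
# and grows towards the image corners: refraction mainly scales the principal
# distance, and the remaining radially varying part is what the distortion
# parameters of a standard self-calibration can absorb.
for deg in (5, 15, 30):
    t_air = np.radians(deg)
    t_wat = water_angle(t_air)
    print(f"{deg:2d} deg in air -> {np.degrees(t_wat):5.2f} deg in water, "
          f"tan ratio {np.tan(t_air) / np.tan(t_wat):.3f}")
```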
Images were acquired at a frame rate of 20 Hz. However, in order to reduce computational and memory effort, images were analysed at 2 Hz. At a lateral movement speed of approximately 0.5 m/s and an acquisition distance of 1 m, a ground sampling distance (GSD) of 1 mm was achieved. This leads to an average overlap of 87 % in the horizontal and 79 % in the vertical direction, which is considered sufficient to robustly identify identical features across several images of a sequence. The total survey time, including dive time and the time needed to locate the wreck, was about 15 minutes, of which 7.5 minutes were spent on image acquisition.
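As a rough plausibility check of these figures, the following sketch recomputes the along-track overlap from the stated speed, analysis frame rate and GSD; the Full-HD image format (1920 × 1080 px) is an assumption, since the sensor size is not given in the text.

```python
# Plausibility check of the stated along-track overlap. The Full-HD image
# format (1920 x 1080 px) is an assumption; the sensor size is not given
# in the text.
speed = 0.5          # lateral movement speed (m/s)
analysis_rate = 2.0  # frame rate used for analysis (Hz)
gsd = 0.001          # ground sampling distance (m/px)

baseline = speed / analysis_rate   # ~0.25 m travelled between analysed frames
footprint_along = 1920 * gsd       # ~1.92 m image extent in the direction of motion
overlap_along = 1.0 - baseline / footprint_along

print(f"along-track overlap: {overlap_along:.0%}")  # ~87 %, matching the text
```

The overlap in the vertical direction depends analogously on the vertical image extent and on the spacing of the survey lines.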
The Baltic Sea has a high turbidity and therefore does not provide very good visibility conditions. In order to improve matching results and colour correctness, several image enhancement methods, as proposed in Mangeruga et al. (2018), were compared. By far the best results were achieved using the LAB enhancement algorithm proposed by Bianco et al. (2015). Fig. 3 displays a wreck feature viewed in the original image and the same feature viewed in an image enhanced by the LAB algorithm. The original image is clearly biased towards green, which obscures the wreck feature and reduces contrast. The enhanced image, on the other hand, still has a green background, but the wreck feature is more distinguishable from the background, which leads to improved matching results.
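The following is a simplified sketch of this kind of LAB-space enhancement: the chromatic channels are shifted towards neutral to remove the colour cast and the lightness channel is stretched. It only illustrates the principle; the actual algorithm of Bianco et al. (2015) operates in the Ruderman lαβ colour space and differs in detail. File names are hypothetical.

```python
import cv2
import numpy as np

def lab_enhance(bgr):
    """Shift the chromatic LAB channels towards neutral (gray-world) and
    stretch the lightness channel. A simplified illustration only, not a
    re-implementation of Bianco et al. (2015)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    L, a, b = cv2.split(lab)

    # Remove the colour cast (e.g. the green bias of turbid water):
    # neutral chroma corresponds to 128 in 8-bit LAB.
    a -= a.mean() - 128.0
    b -= b.mean() - 128.0

    # Stretch the lightness channel to the full range to recover contrast.
    L = (L - L.min()) / max(float(L.max() - L.min()), 1e-6) * 255.0

    lab = np.clip(cv2.merge([L, a, b]), 0, 255).astype(np.uint8)
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# Hypothetical file names, for illustration only.
enhanced = lab_enhance(cv2.imread("frame_000123.png"))
cv2.imwrite("frame_000123_enhanced.png", enhanced)
```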
Using the enhanced imagery, photogrammetric analyses were performed using structure-from-motion (SfM) processing methods. SfM techniques (e.g. Snavely et al. 2006; Furukawa and Ponce 2010) generate 3D representations from 2D image sequences without initial information. Feature points are extracted from the images and matched, employing robust estimation techniques such as RANSAC (random sample consensus; Fischler and Bolles 1981); a generic sketch of this matching step is given below.
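For illustration, a minimal two-view matching step with RANSAC outlier rejection could look as follows, using OpenCV's SIFT features and fundamental-matrix estimation. This is a generic example, not the undisclosed algorithm used by Metashape.

```python
import cv2
import numpy as np

def match_pair(img1, img2):
    """Two-view feature matching with RANSAC outlier rejection. A generic
    illustration of the SfM matching step; the algorithms actually used by
    Agisoft Metashape are not disclosed."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Nearest-neighbour matching with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC (Fischler and Bolles 1981): keep only matches consistent with
    # the epipolar geometry of the image pair.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers], F
```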
Using these corresponding image points in multiple images, bundle adjustment was performed using a self-calibration approach. With this method, the interior orientation was calculated using distortion parameters according to Brown (1971), i.e. principal distance, principal point, radial-symmetric and decentering distortion, and affinity and shear. Simultaneously, values for the exterior orientation (the 6DOF pose of the camera in object space) and the 3D coordinates of the object points were estimated.
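As a reference for the parameters listed above, the following sketch writes out the collinearity projection with a Brown-type distortion model. Parameter names and sign conventions are chosen for illustration and do not necessarily match the Metashape parameterisation.

```python
import numpy as np

def brown_project(X, R, X0, c, xp, yp, K1, K2, K3, P1, P2, B1, B2):
    """Collinearity projection with a Brown (1971) distortion model:
    principal distance c, principal point (xp, yp), radial-symmetric (K1-K3)
    and decentering (P1, P2) distortion, affinity and shear (B1, B2).
    Parameter names and sign conventions are illustrative only."""
    # Exterior orientation: rotate/translate the object point into the camera frame.
    Xc = R @ (np.asarray(X, dtype=float) - np.asarray(X0, dtype=float))

    # Ideal (distortion-free) image coordinates from the collinearity equations.
    x = -c * Xc[0] / Xc[2]
    y = -c * Xc[1] / Xc[2]

    r2 = x * x + y * y
    radial = K1 * r2 + K2 * r2**2 + K3 * r2**3
    dx = x * radial + P1 * (r2 + 2 * x * x) + 2 * P2 * x * y
    dy = y * radial + P2 * (r2 + 2 * y * y) + 2 * P1 * x * y

    # Affinity and shear are commonly applied to the x coordinate only.
    return np.array([xp + x + dx + B1 * x + B2 * y,
                     yp + y + dy])
```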
The aforementioned steps were performed using Agisoft Metashape, a widely used SfM software package that has proven to be robust in underwater photogrammetry (Mangeruga et al. 2018). Unfortunately, due to the commercial nature of the software, no detailed insights are provided into the algorithms used for orientation and the subsequent dense image matching (Remondino et al. 2013). For this image bundle, 606 images from the starboard side were aligned with …
Fig. 4: RGB-coloured sparse point cloud and camera trajectory (red)