Fusing ROV-based photogrammetric underwater imagery with multibeam soundings for reconstructing wrecks in turbid waters

Underwater photogrammetry 
HN 116 — 06/2020 
Fig. 5: Coloured dense point cloud of starboard side 
132,516 tie points and used for all further processing steps (Fig. 4). Using unprocessed imagery, only 440 images were aligned, with a significantly smaller covered area on the ship hull. Thus, for further processing, the enhanced imagery was used.
It is observable that the exterior orientations form a long stretched trajectory along the ship hull, starting on the right-hand side. On the far left side, the distance to the object increases significantly, resulting in several images that could not be aligned due to turbid visibility and thus insufficient valid observed tie points in the images.
The RMS (root mean square) of the reprojection error was 1.50 px, equivalent to 1.5 mm in object space, assuming an average acquisition distance of 1 m. This rather low accuracy for photogrammetric applications probably originated from high turbidity and low contrast in the underwater imagery, and from a long stretched object that posed a dead-reckoning problem. However, this postulated accuracy only refers to the internal accuracy of the image bundle after adjustment. An exterior accuracy assessment by comparing length measurement errors or cloud-to-cloud distances to a reference is not possible, as only a monocular system is used. Thus, neither can an independent absolute scale be provided, nor was it possible to position a static reference object near the wreck with which to control results. However, if no significant deviations exist in a cloud-to-cloud comparison, it can be concluded that the photogrammetric point cloud achieves at least the same accuracy as the MBES data, which is to be expected.
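The quoted conversion from image space to object space follows from the pinhole scaling factor, distance divided by focal length. A minimal sketch of this calculation; the pixel pitch and focal length below are hypothetical values, chosen only so that the ground sampling distance comes out at roughly 1 mm/px at 1 m range, consistent with the figures quoted above:

```python
def reprojection_error_object_space(rms_px, pixel_pitch_mm, focal_mm, distance_m):
    """Project an image-space RMS error (pixels) into object space (mm)
    via the pinhole scaling factor distance / focal length."""
    gsd_mm_per_px = pixel_pitch_mm * (distance_m * 1000.0) / focal_mm
    return rms_px * gsd_mm_per_px

# Hypothetical sensor: 5 um pixel pitch, 5 mm focal length, 1 m distance
# -> GSD of 1.0 mm/px, so 1.50 px RMS corresponds to 1.5 mm in object space.
err_mm = reprojection_error_object_space(1.5, 0.005, 5.0, 1.0)
```

Note that the object-space figure scales linearly with acquisition distance, so it only holds near the assumed 1 m standoff.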
Using the aligned sparse point cloud, dense image matching was performed. According to the work of Remondino et al. (2013), a method similar to semi-global matching (Hirschmüller 2008) is used, though this has not been confirmed by Agisoft. Using the integrated algorithm, a dense point cloud consisting of 12.3 million points was created using the »High« quality setting in the software. Processing, including the calculation of depth maps, took approximately four hours on an Intel Core i7 with 16 GB RAM. The resulting point cloud is shown in Fig. 5.
3 Data fusion 
Fig. 6 shows the flow chart of the data fusion including all preprocessing steps before registration. Both the photogrammetric and MBES data sets were adjusted individually in their respective coordinate systems and then combined by point cloud analysis methods. Absolute scaling was provided by the MBES data, thus providing absolute scale for the photogrammetric data as well. In a practical solution, and in order to control results, an independent scaling would be desirable, e.g. by using a second camera for stereo analysis. Two independent point clouds can then be fused, deviations in scale evaluated, and possible errors identified.
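Evaluating a scale deviation between two independently derived point clouds can be reduced, over corresponding points, to comparing their spreads about the respective centroids, since rotation and translation cancel out of that ratio. A minimal sketch of this idea, not the authors' implementation; it assumes the correspondences have already been established:

```python
import numpy as np

def relative_scale(pts_a, pts_b):
    """Estimate the scale factor mapping point set A onto the corresponding
    point set B as the ratio of RMS spreads about the centroids.
    Rigid motion (rotation + translation) between the sets cancels out."""
    a = pts_a - pts_a.mean(axis=0)
    b = pts_b - pts_b.mean(axis=0)
    return np.sqrt((b ** 2).sum() / (a ** 2).sum())
```

A ratio close to 1.0 would indicate that the two clouds agree in scale; a systematic deviation would point to a scaling error in one of the reconstructions.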
Firstly, the two point clouds were coarsely aligned by selecting salient points. For that, the front tip of the wreck's bow and other salient points on the top of the wreck's starboard side were manually measured in the imagery and their 3D coordinates determined. This process could be automated by automatically identifying salient features in a resulting point cloud or applying a reference
Fig. 6: Data processing and fusion to combine data from both photogrammetric analysis and MBES
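Coarse alignment from a handful of manually picked corresponding points amounts to estimating a similarity transform (scale, rotation, translation). One common closed-form least-squares solution is that of Umeyama (1991); the sketch below uses it as an illustration and is not necessarily the method applied in this study:

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form least-squares similarity transform (Umeyama 1991):
    finds scale s, rotation R and translation t with dst ~= s * R @ src + t,
    given corresponding 3D points src, dst of shape (n, 3)."""
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    # Cross-covariance of the centred correspondences
    H = B.T @ A / n
    U, S, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0
    R = U @ D @ Vt
    var_src = (A ** 2).sum() / n
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

With noise-free correspondences the transform is recovered exactly; with manually measured points the residuals after applying it give a first check on the coarse registration quality before any fine registration step.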