Monday, May 12, 2014
BICAM SLAM
Joan Sola, in his article on BICAM SLAM, explains the benefits of
running mono-vision algorithms on SLAM robots equipped with stereo
cameras. He summarizes: "
By using monocular algorithms on both cameras, the advantages of
mono-vision (bearing-only, with infinity range but no 3D instant
information) and stereo-vision (3D information only up to a limited
range) naturally add up to provide interesting possibilities, that
are here developed and demonstrated using an EKF-based monocular
SLAM algorithm. Mainly we obtain: a) fast 3D mapping with long term,
absolute angular references; b) great landmark updating flexibility;
and c) the possibility of stereo rig extrinsic self-calibration,
providing a much more robust and accurate sensor. Experimental
results show the pertinence of the proposed ideas, which should be
easily exportable (and we encourage to do so) to other, more
performing, vision-based SLAM algorithms."
Basically this means that mono-vision cameras measure only bearings,
so their sensing range is effectively unlimited. Typically these
mono-vision cameras are used singly, and on their own they have a
difficult time with depth perception. Their benefits, however,
include freedom from the distortion that long-range lenses introduce,
which yields absolute angular references (more accuracy), and a
larger field of view. That field of view increases efficiency
immensely: the more of the scene the camera captures in a single
frame, the less it needs to move and recalibrate. This also means
fewer corrections between frames, resulting in, once again, more
accuracy. Now, with a second mono-vision camera providing stereo
vision, a robot is able to use its second camera as a point of
reference, calibrating the rig and getting a better feel for depth.
By far the biggest benefit mono-vision cameras have over stereo
cameras, though, is their superior visual range. Sola goes into this,
stating: "The
drawback of stereo-based systems is a limited range of 3D
observability (the dense-fog effect: remote objects cannot be
considered), and that they strongly depend on precise calibrations to
be able to extend it."
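That limited range can be made concrete with a quick back-of-the-envelope sketch. In a rectified stereo rig, triangulated depth follows z = f·b/d, and the depth error from a fixed disparity error grows with the square of the depth, which is exactly the "dense-fog" behaviour: near points are sharp, remote points are essentially unmeasurable. The focal length, baseline, and disparity error below are made-up example values, not numbers from Sola's paper.

```python
# Why stereo 3D observability is range-limited (the "dense-fog" effect):
# the depth error caused by a disparity error grows with depth squared.
# All numeric values here are illustrative, not from the paper.

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Triangulated depth for a rectified stereo pair: z = f * b / d."""
    return focal_px * baseline_m / disparity_px

def depth_uncertainty(focal_px, baseline_m, depth_m, disparity_err_px=0.5):
    """First-order depth error: dz ~= z^2 / (f * b) * dd."""
    return depth_m ** 2 / (focal_px * baseline_m) * disparity_err_px

f_px, b_m = 500.0, 0.12  # hypothetical focal length (px) and baseline (m)
for z in (2.0, 10.0, 50.0):
    print(f"depth {z:5.1f} m -> uncertainty +/- {depth_uncertainty(f_px, b_m, z):6.2f} m")
```

With these example numbers the uncertainty at 2 m is a few centimetres, but at 50 m it exceeds 20 m: both cameras can still see the point, yet it lies outside the useful stereo region, which is where the bearing-only mono-vision treatment takes over.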
This means that although typical stereo cameras can resolve depth out
to a decent maximum distance, the calibration work needed to capture
a scene at that distance is quite tedious. In his writings, Sola also
goes over how the cameras will identify landmarks, stating: “As a
general idea, one can simply initialize landmarks following
mono-vision techniques from the first camera, and
then observe them from the
second one: we will determine their 3D positions with more or less
accuracy depending on if they are located inside or outside the
stereo observability region.”
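That two-step flow can be sketched in a few lines of Python: the first camera initializes a landmark as a bearing-only ray (depth unknown, so range is effectively infinite), and the second camera's bearing then triangulates a depth, which only counts as a 3D fix if it falls inside the stereo observability region. The function names, the planar geometry, and the 15 m threshold are my own illustrative choices, not details from Sola's paper.

```python
import math

STEREO_RANGE_M = 15.0  # hypothetical limit of the stereo observability region

def init_landmark(bearing_rad):
    """First camera: bearing-only initialization, depth unknown."""
    return {"bearing": bearing_rad, "depth": None}

def observe_from_second_camera(landmark, baseline_m, bearing2_rad):
    """Second camera: triangulate depth from the two bearings.

    Bearings are measured in-plane from the baseline direction; camera 2
    sits baseline_m along that direction from camera 1.
    """
    th1, th2 = landmark["bearing"], bearing2_rad
    denom = math.sin(th2 - th1)
    if abs(denom) < 1e-9:
        return landmark  # parallel rays: point is at "infinity" for this rig
    depth = baseline_m * math.sin(th2) / denom  # law of sines in the triangle
    # Only trust the 3D fix inside the stereo observability region.
    landmark["depth"] = depth if 0.0 < depth <= STEREO_RANGE_M else None
    return landmark

# A nearby point gets a full 3D position...
near = observe_from_second_camera(init_landmark(math.radians(90.0)),
                                  0.12, math.radians(135.0))
# ...while a remote one stays bearing-only, just like a mono-vision landmark.
far = observe_from_second_camera(init_landmark(math.radians(90.0)),
                                 0.12, math.radians(90.4))
print(near["depth"], far["depth"])
```

The nearby landmark triangulates to about 0.12 m and is kept as a 3D point, while the remote one triangulates beyond the assumed stereo range and keeps only its bearing.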
In layman's terms, this means the cameras have a master-slave
relationship when it comes to identifying landmarks. The master
camera identifies landmarks throughout its field of view, and the
second camera then works with the first to determine where those
landmarks sit in the 3D environment. The best part about these two
cameras working together is that there is no radial distortion (the
distortion caused by long-range lenses) to skew the inputs. I see a
bright future for bi-mono-vision SLAM.