A popular way to map out landmarks in SLAM is to use consumer depth (range) cameras to capture images of a scene. Part of SLAM involves reconstructing the environment seen in these images and rebuilding it as a 3D model. There is an issue with this, namely that the range measurements these cameras produce are slightly distorted, by differing degrees depending on the distance to the surface. When attempts are made to reconstruct what was captured, severe fragmentation appears in several parts of the model. Zhou and Koltun introduce their attempt at a solution in their paper on SLAC, stating: "This compact parameterization enables extremely efficient simultaneous localization and calibration (SLAC). As a result, our approach can reconstruct real-world scenes while estimating and correcting for range distortion in real time." In other words, they want their algorithm to derive an undistorted version of each range image so that an area can be mapped properly, without strange-looking fragments appearing. Their abstract also states what makes their approach different from others: "
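To make the idea of depth-dependent range correction concrete, here is a minimal sketch. The paper's actual compact parameterization is more sophisticated; this example simply assumes we already have a per-pixel table of correction factors sampled at a few depths, and interpolates a factor for each measured range. All names and the table layout are illustrative assumptions, not from the SLAC paper.

```python
import numpy as np

def undistort_range_image(depth, correction_grid, max_depth=5.0):
    """Apply a per-pixel, depth-dependent multiplicative correction.

    depth           : (H, W) array of raw range measurements in meters.
    correction_grid : (H, W, K) array of correction factors; the last
                      axis samples the correction at K evenly spaced
                      depths in [0, max_depth]. Illustrative only --
                      not the parameterization used in the paper.
    """
    h, w, k = correction_grid.shape
    # Map each measured depth to a fractional index into the K samples.
    idx = np.clip(depth / max_depth * (k - 1), 0, k - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, k - 1)
    frac = idx - lo
    rows, cols = np.mgrid[0:h, 0:w]
    # Linearly interpolate the correction factor along the depth axis,
    # then scale the raw measurement by it.
    factor = (1 - frac) * correction_grid[rows, cols, lo] \
             + frac * correction_grid[rows, cols, hi]
    return depth * factor
```

With an all-ones grid the image passes through unchanged; a calibration step would instead fill the grid so that distorted measurements are pulled back toward their true values.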
Our approach directly recovers a camera trajectory alongside the distortion model. This distinguishes it from reconstruction approaches that simply deform the input data without performing distortion estimation and camera localization [14]." This means that, unlike typical correction methods, which simply warp the scene data as soon as it comes in, their algorithm uses the distortion estimate to better determine the location of the robot before correcting the image. A robot cannot simply store every raw picture it captures, due to memory and speed constraints. Instead it must take the data from a landmark, put it into a compact form, and compare it against the compacted data from old landmarks to keep its map of the area as accurate as possible. Theoretically it could store all of the raw information, but keeping SLAM running in real time would make the memory cost a fortune, and in the end most of the information would never be needed.
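The compaction-and-comparison idea can be sketched as follows. This is my own rough illustration, not the representation SLAC or any particular SLAM system uses: a landmark's point cloud is compressed to its set of occupied voxels, and a new landmark is compared against stored ones by voxel overlap.

```python
import numpy as np

def compact_signature(points, voxel=0.05):
    """Compress a landmark's (N, 3) point cloud into the small set of
    voxel cells it occupies (an illustrative compaction, not SLAC's)."""
    keys = np.unique(np.floor(points / voxel).astype(np.int64), axis=0)
    return {tuple(k) for k in keys}

def match_score(sig_a, sig_b):
    """Jaccard overlap between two signatures: 1.0 means the landmarks
    occupy exactly the same voxels, 0.0 means no overlap."""
    if not sig_a and not sig_b:
        return 1.0
    return len(sig_a & sig_b) / len(sig_a | sig_b)

def best_match(new_sig, stored):
    """Compare a new landmark against every stored signature; return
    the index of the best match and its score."""
    scores = [match_score(new_sig, s) for s in stored]
    i = int(np.argmax(scores))
    return i, scores[i]
```

Storing only occupied voxel cells instead of every raw point is what keeps the per-landmark memory footprint small enough to compare a new observation against the whole map in real time.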
Page 2 of Zhou and Koltun's paper shows three 3D reconstructions of a car built from SLAM data. The first is produced without their SLAC algorithm running. The second is produced in real time (online). The third is produced with no time constraints (offline), resulting in a slightly better rendering than the second, yet both are much better than the first and have no major deformities. The biggest thing to notice is how close in quality the online and offline versions are. This is what SLAM developers strive for: the ability to do things as quickly as possible with nearly the same accuracy as a system with no time or computing restraints.
Q.-Y. Zhou and V. Koltun. Simultaneous Localization and Calibration: Self-Calibration of Consumer Depth Cameras. 2014.
[14] Q.-Y. Zhou, S. Miller, and V. Koltun. Elastic fragments for
dense scene reconstruction. In ICCV, 2013.