Proceedings of the
9th International Conference of Asian Society for Precision Engineering and Nanotechnology (ASPEN2022)
15 – 18 November 2022, Singapore
doi:10.3850/978-981-18-6021-8_OR-02-0232

Accurate All-round 3D Measurement from Trinocular 360-degree Stereo Cameras via Geometric Optimization of Image Texture and Color

Takumi Hamada, Sarthak Pathak and Kazunori Umeda

Department of Precision Mechanical Engineering, Chuo University, 1-13-27 Kasuga, Bunkyo-ku, Tokyo, 112-8551, Japan

ABSTRACT

Cameras are often used for three-dimensional (3D) measurement in robotics and other applications. 3D reconstruction with cameras is useful in tasks such as mapping disaster sites that are inaccessible to humans and inspecting infrastructure. Robots can move around such locations and capture 3D data from multiple viewpoints. However, the data obtained from a moving robot must be integrated to reconstruct the entire environment, a process that is tedious, time-consuming, and prone to inaccuracy. For this reason, we focus on a method that can accurately measure the environment in all directions at once.
Binocular stereo vision is one of the most common methods for 3D reconstruction. A stereo camera obtains distance information from the disparity between images captured by two cameras at different viewpoints. With ordinary cameras, it is difficult to measure all directions at once because the field of view is limited. 360-degree cameras, on the other hand, can capture the entire surrounding environment. However, it is practically difficult to achieve highly accurate all-round 3D measurement with binocular 360-degree stereo vision, because the accuracy in the epipolar direction, i.e., the direction along the baseline connecting the two cameras, is extremely low.
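As a rough illustration of this accuracy loss (our own sketch, not part of the proposed method; all values are illustrative), the following Python snippet computes the angular disparity between the two viewing rays of a binocular pair. For points near the baseline (epipolar) direction the disparity nearly vanishes, so the same matching noise produces a much larger range error.

```python
import numpy as np

def angular_disparity(baseline, distance, alpha_deg):
    """Angle between the two viewing rays of a point observed by a
    binocular pair separated by `baseline` along the x-axis.
    `alpha_deg` is the point's direction measured from the baseline."""
    a = np.radians(alpha_deg)
    p = distance * np.array([np.cos(a), np.sin(a), 0.0])  # point; camera 1 at origin
    d1 = p / np.linalg.norm(p)                            # ray from camera 1
    d2 = p - np.array([baseline, 0.0, 0.0])               # ray from camera 2
    d2 /= np.linalg.norm(d2)
    return np.degrees(np.arccos(np.clip(d1 @ d2, -1.0, 1.0)))

# Perpendicular to the baseline: a strong disparity signal.
print(angular_disparity(0.2, 5.0, 90))  # ~2.3 deg
# Near the epipolar (baseline) direction: almost no disparity left.
print(angular_disparity(0.2, 5.0, 5))   # ~0.2 deg
```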
In this research, we propose a method to increase the accuracy of all-round 3D measurement using the stereo camera principle with images obtained from a trinocular spherical stereo camera setup. By placing a third camera at 90 degrees to the other two, it is possible to recover the loss of accuracy along the epipolar direction. We aim to obtain high accuracy by using geometric and photometric constraints: each 3D point in the environment is optimized so that it satisfies the epipolar constraints of each camera pair and has the same intensity in each image. The accuracy is further improved by weighting the reprojection error according to a confidence level, and we account for errors in the stereo disparity calculation by incorporating image gradient information into the optimization. In short, we reproject each measured point of the environment onto all three cameras and geometrically minimize the reprojection error while considering the color and gradient information obtained from the images (a simplified sketch of this per-point refinement follows the list below). Experimentally, we investigate the improvement in accuracy due to three elements:
1. Geometric constraints - consistency of the projections of a 3D point in the environment onto the three cameras
2. Color information - consistency of the color information projected onto the three cameras
3. Reliability of the disparity calculated by optical flow, based on gradient information in the image
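The following is a minimal, self-contained sketch of the geometric core of such a per-point refinement, assuming known camera centers, observed bearing vectors, and per-view confidence weights. All variable names and values are hypothetical, and the fixed weights here merely stand in for the color- and gradient-based confidence terms described above.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical setup: three camera centers (the third camera offset at
# 90 degrees to the binocular baseline) and the unit bearing vectors under
# which each camera observes the same scene point.
centers = np.array([[0.0, 0.0, 0.0],
                    [0.2, 0.0, 0.0],    # second camera along the baseline
                    [0.0, 0.2, 0.0]])   # third camera at 90 degrees
true_point = np.array([1.5, 2.0, 0.8])
bearings = np.array([(true_point - c) / np.linalg.norm(true_point - c)
                     for c in centers])
weights = np.array([1.0, 0.6, 0.9])     # placeholder confidence per view

def residuals(x):
    """Weighted angular reprojection error of candidate point x
    against the observed bearing of each camera."""
    res = []
    for c, b, w in zip(centers, bearings, weights):
        d = (x - c) / np.linalg.norm(x - c)  # predicted bearing from this camera
        res.append(w * (d - b))              # 3-vector residual per view
    return np.concatenate(res)

x0 = centers.mean(axis=0) + bearings[0]      # rough initial guess
sol = least_squares(residuals, x0)
print(sol.x)  # refined 3D point; converges to true_point in this toy setup
```

In a full implementation, the weights would be derived from the color-consistency and gradient-based reliability terms listed above rather than fixed constants.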

Keywords: Image processing, Spherical camera, 3D reconstruction, Stereo.


