3D reconstruction/self-positioning estimate
Recover 3D shapes from images and estimate the camera's position.
- Our 3D reconstruction and self-positioning estimate technology measures detailed 3D shapes from multiple images and accurately estimates the image capture position at each point in time.
- Expected to be used in a variety of scenarios, including autonomous movement of automobiles and drones, and infrastructure and product inspections.
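The core of measuring 3D shapes from multiple images is triangulation: a point seen by two cameras with known poses can be recovered as the intersection of the two viewing rays. The sketch below is an illustrative textbook linear (DLT) triangulation, not Toshiba's actual pipeline; the camera intrinsics and poses are made-up values.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover one 3D point from its pixel coordinates x1, x2 (each (u, v))
    in two views with known 3x4 projection matrices P1, P2 (linear DLT)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy stereo rig (assumed values): identity pose and a 1-unit baseline along x.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.2, -0.1, 4.0])
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_rec, X_true))  # prints True
```

In practice, pixel correspondences are noisy, so dense stereo methods such as semi-global matching (the subject of the SGM-Nets paper cited below) are used to find them robustly before triangulation.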
Applications
- Autonomous movement of vehicles, drones, and robots
- Automation of maintenance and inspection services
- Product defect inspections
- Other 3D measurement technologies
Benchmarks, strengths, and track record
- 3D reconstruction: Achieved the world's best accuracy on public datasets of vehicle-mounted camera images (as of October 2016).
- Self-positioning estimation: Accuracy of 4 cm indoors and 25 cm outdoors. Usable even in environments where GPS is unavailable (e.g., indoors).
- Papers accepted at top international conferences in the image recognition field:
- Computer Vision and Pattern Recognition (CVPR) 2017
- International Conference on 3D Vision (3DV) 2019
- International Conference on Robotics and Automation (ICRA) 2020
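At its simplest, self-positioning estimation means recovering a camera's position from observations of known 3D landmarks. The sketch below illustrates the standard linear (DLT) approach: fit the 3x4 projection matrix from 2D-3D correspondences, then read the camera center off its null space. The scene, intrinsics, and landmark layout are invented for illustration; a real system would add outlier rejection (e.g., RANSAC), nonlinear refinement, and sensor fusion as in the ICRA 2020 paper cited below.

```python
import numpy as np

def estimate_projection(points3d, points2d):
    """Linear DLT: estimate P (3x4) with x ~ P @ [X; 1] from >= 6 points."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        Xh = np.array([X, Y, Z, 1.0])
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)

def camera_center(P):
    """The camera position C is the null space of P, since P @ [C; 1] = 0."""
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return C[:3] / C[3]

# Toy scene (assumed values): camera at (1, 2, -5) with simple intrinsics.
K = np.array([[400.0, 0, 0], [0, 400.0, 0], [0, 0, 1]])
C_true = np.array([1.0, 2.0, -5.0])
P_true = K @ np.hstack([np.eye(3), -C_true.reshape(3, 1)])

rng = np.random.default_rng(0)
pts3d = rng.uniform(-1, 1, size=(8, 3)) + [0, 0, 3]  # landmarks in front of camera
pts2d = [ (lambda x: x[:2] / x[2])(P_true @ np.append(X, 1.0)) for X in pts3d ]

P_est = estimate_projection(pts3d, pts2d)
print(np.allclose(camera_center(P_est), C_true))  # prints True
```

With noise-free correspondences the recovered center matches exactly; the centimeter-level accuracies quoted above reflect how well such estimates hold up under real image noise and motion.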
Inquiries
Please include the title “Toshiba AI Technology Catalog: 3D reconstruction/self-positioning estimate” or the URL in the inquiry text.
Please note that because this technology is currently the subject of R&D activities, immediate responses to inquiries may not be possible.
References:
- A. Seki et al., “Detailed, real-time 3D reconstruction technology using video from a single-lens camera,” Vol. 68, No. 5, pp. 40-43, 2013.
- A. Seki and M. Pollefeys, “SGM-Nets: Semi-Global Matching With Neural Networks”, CVPR 2017.
- R. Nakashima and A. Seki, “SIR-Net: Scene Independent End-to-End Trainable Visual Relocalizer,” 3DV, 2019.
- R. Nakashima and A. Seki, “Uncertainty-based Adaptive Sensor Fusion for Visual-Inertial Odometry under Various Motion Characteristics,” ICRA, 2020.