AI and machine learning are having a transformative impact on robotics. While these technologies are still in their infancy, their rapid growth demands low-cost, robust sensors more than ever. For most of these applications, lidar has been the central sensor, yet most available lidars are expensive and not robust across operating conditions. Ouster provides lidars that are not only low-cost and accessible to researchers and engineers, but also of high quality.
Lidar data has incredible benefits, such as rich spatial information and lighting-agnostic sensing, but it lacks the raw resolution and efficient array structure of camera images, and 3D point clouds remain harder to encode in a neural network or process with hardware acceleration. With the tradeoffs between the two sensing modalities in mind, we set out from the very beginning to bring the best aspects of lidars and cameras together in a single device. The OS-1 now outputs fixed-resolution depth images, signal images, and ambient images in real time, all without a camera.
While RGB-D cameras and traditional flash lidars are also capable of outputting structured range data, neither class of sensor matches the Ouster OS-1 in range, range resolution, field of view, or robustness in outdoor environments. However, these shorter-range structured 3D cameras can still benefit from the work we’re doing, and we encourage manufacturers of these products to consider our approach.
Convolutional neural networks have recently pushed the limits of computer vision by exploiting the inherent grid structure of images. Unstructured lidar data has been difficult to use with convolutional networks, so the majority of current approaches rely on preprocessing steps that project the unstructured data onto a grid. Because the OS-1 outputs fixed-resolution image frames with depth, signal, and ambient data at each pixel, we can feed these images directly into battle-tested deep learning algorithms that were developed specifically for the structured data cameras produce. We get the best of both worlds: the advantages of both 3D and 2D approaches, without any sensor fusion or preprocessing.
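To make this concrete, here is a minimal sketch of the idea: because each OS-1 frame is a fixed-resolution grid, the three per-pixel images can be stacked into a single multi-channel tensor, exactly like an RGB frame, and handed to any off-the-shelf image CNN. The 64×1024 resolution is one possible OS-1 configuration, and the random arrays are placeholders standing in for real sensor output.

```python
import numpy as np

def stack_lidar_frame(depth, signal, ambient):
    """Stack the three per-pixel images into one (H, W, 3) tensor,
    analogous to the three channels of an RGB camera image."""
    return np.stack([depth, signal, ambient], axis=-1)

# Placeholder data for one frame of a hypothetical 64-channel, 1024-column scan.
H, W = 64, 1024
depth   = np.random.rand(H, W).astype(np.float32)
signal  = np.random.rand(H, W).astype(np.float32)
ambient = np.random.rand(H, W).astype(np.float32)

frame = stack_lidar_frame(depth, signal, ambient)
print(frame.shape)  # (64, 1024, 3)
```

From here, `frame` can be fed to any standard image network with no point-cloud preprocessing; the only change from an RGB pipeline is what the three channels mean.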
Using this unique capability, we’ve worked with our labeling partners to take advantage of our structured data in their labeling tools, minimizing the cost of labeling, expanding their capabilities, and significantly improving annotation accuracy.
I believe there is a lot of unexplored potential in machine learning. Deep learning has surpassed the previous state of the art in many areas and now leads the field on many tasks. Improvements in architectures, algorithms, and models keep pushing the limits of machine learning, but none of them are possible without an abundance of useful, labeled data. Ouster’s mission is to produce reliable, rugged, high-resolution, low-cost sensors whose unique data unlocks new advances in computer vision and perception.
Building on these achievements, Ouster will continue to pursue innovative solutions and produce better sensors that pave the way for research.