iDAR™ is more than LiDAR.
It’s a groundbreaking form of artificial perception that fuses LiDAR, computer vision, and artificial intelligence to deliver unprecedented advancements in perception and motion planning for ADAS and autonomous vehicles.
Unlike first-generation LiDAR technologies, whose siloed sensors, rigid, symmetrical data collection methods, and post-processing lead to over- and under-sampling of information and to latency, iDAR optimizes data collection at the source, transferring less data, of higher quality and relevance, for rapid perception and path planning.
AEye’s iDAR combines the world’s first agile MOEMS LiDAR, pre-fused with a low-light camera and embedded AI to create software-definable and extensible hardware that can dynamically adapt to real-time demands. By enabling intelligent prioritization and interrogation, AEye’s iDAR can target and identify objects within a scene 10-20x more effectively than LiDAR-only products.
iDAR delivers higher accuracy, longer range, and more intelligent information to optimize path planning software, enabling radically improved autonomous vehicle safety and performance at a reduced cost.
AEye’s iDAR mimics how the human visual cortex focuses on and evaluates potential driving hazards: it uses a distributed architecture and at-the-edge processing to dynamically track targets and objects of interest while continually assessing the general surroundings. This enables direct detection for every pixel and voxel in each frame.
iDAR’s True Color LiDAR (TCL) instantaneously overlays real-world 2D color on 3D data, adding computer vision intelligence to 3D point clouds. By enabling absolute color and distance segmentation, and co-location with no registration processing, TCL enables the quick, accurate interpretation of signage, emergency warning lights, brake versus reverse lights, and other scenarios that have historically been difficult for legacy LiDAR-based systems to navigate.
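To make the idea of overlaying 2D color on 3D data concrete, here is a minimal sketch of the general technique of colorizing a point cloud from a co-located camera. This is an illustration only, not AEye’s implementation: the pinhole intrinsics (`fx`, `fy`, `cx`, `cy`), the toy image, and the points are all made-up assumptions.

```python
import numpy as np

def colorize_points(points_xyz, image_rgb, fx, fy, cx, cy):
    """Attach an RGB color to each 3D point that projects into the image.

    points_xyz: (N, 3) points in the camera frame (Z forward, meters).
    image_rgb:  (H, W, 3) color image assumed pre-registered with the LiDAR.
    Returns an (M, 6) array of [x, y, z, r, g, b] for points inside the frame.
    """
    h, w, _ = image_rgb.shape
    x, y, z = points_xyz.T
    valid = z > 0                                    # keep points in front of the camera
    u = (fx * x[valid] / z[valid] + cx).astype(int)  # pixel column via pinhole projection
    v = (fy * y[valid] / z[valid] + cy).astype(int)  # pixel row
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    pts = points_xyz[valid][inside]
    colors = image_rgb[v[inside], u[inside]]         # sample the 2D color per 3D point
    return np.hstack([pts, colors])

# Toy example: an all-red 4x4 image, one point in view and one behind the camera.
img = np.zeros((4, 4, 3), dtype=float)
img[..., 0] = 255.0
pts = np.array([[0.0, 0.0, 5.0],    # straight ahead -> projects to image center
                [0.0, 0.0, -1.0]])  # behind the camera -> dropped
colored = colorize_points(pts, img, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(colored)  # one surviving point with color [255, 0, 0]
```

The key property the copy highlights is that when the camera and LiDAR are co-located and pre-fused, this per-point lookup needs no separate registration step at runtime.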
Software-Definable and Extensible
iDAR adds three feedback loops that do not exist today: one at the sensor layer, one at the perception layer, and one with the path-planning software. By enabling customizable data collection in real time, the system can adapt to the environment and dynamically change performance based on the customer’s or host’s applications and needs. In addition, it can emulate legacy systems, define regions of interest, focus on threat detection, and/or be programmed for variable environments, such as highway or city driving. This new form of intelligent data collection enables radically improved autonomous vehicle safety and performance, at a reduced cost.
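A software-definable scan can be pictured as a per-frame sampling schedule that a perception-layer feedback loop reshapes. The sketch below is a hypothetical illustration, not AEye’s API: the mode presets, grid, and density numbers are assumptions chosen only to show how regions of interest and driving modes could reweight data collection.

```python
# Hypothetical mode presets: relative sampling densities per scan cell.
MODES = {
    "highway": {"base_density": 1, "roi_density": 8},  # sparse background, strong focus
    "city":    {"base_density": 2, "roi_density": 4},  # denser baseline, moderate focus
}

def build_scan_schedule(mode, regions_of_interest, grid=(4, 4)):
    """Return a per-cell sampling-density grid for one frame.

    regions_of_interest: list of (row, col) cells flagged by the perception
    layer (e.g. a detected pedestrian) for denser interrogation.
    """
    cfg = MODES[mode]
    rows, cols = grid
    schedule = [[cfg["base_density"]] * cols for _ in range(rows)]
    for r, c in regions_of_interest:
        schedule[r][c] = cfg["roi_density"]  # concentrate shots on the ROI
    return schedule

# Perception layer flags a hazard in cell (1, 2); the sensor re-plans the frame.
plan = build_scan_schedule("highway", [(1, 2)])
for row in plan:
    print(row)
```

The same structure suggests how the other behaviors in the copy could be expressed: emulating a legacy sensor is a schedule with uniform density and no ROIs, while switching from highway to city driving is simply selecting a different preset.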