Christian Mourad | VoxelSensors: How can you get a new 3D depth point every 10 nanoseconds without processing a full image frame?
00:04:52 - 00:06:36
Other snippets from this talk
Summary of the clip:
How can you get a new 3D depth point every 10 nanoseconds without processing a full image frame?
VoxelSensors' system architecture is based on active triangulation. It uses a laser beam scanner (LBS) with a 2D MEMS mirror to project a single, moving laser dot onto the scene. This dot follows a continuous pattern, such as a Lissajous figure, to actively and sequentially illuminate the environment, concentrating all optical power into one small point at any given time.
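To make the scan pattern concrete, here is a minimal sketch of a Lissajous trajectory for a 2D mirror. The axis frequencies `fx`, `fy` and the phase offset are illustrative placeholders, not VoxelSensors' actual MEMS parameters.

```python
import math

def lissajous_point(t, fx=21_000.0, fy=1_300.0, phase=math.pi / 2):
    """Normalised mirror deflection (x, y) in [-1, 1] at time t (seconds).

    A Lissajous figure is two sinusoids at different frequencies, one per
    mirror axis; the frequencies here are illustrative assumptions only.
    """
    x = math.sin(2 * math.pi * fx * t)
    y = math.sin(2 * math.pi * fy * t + phase)
    return (x, y)
```

Because the trajectory is a deterministic function of time, the system always knows where the dot is pointing, which is what makes the later per-point triangulation possible.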
The core of the system is the proprietary Single Photon Active Event Sensor (SPAS). This sensor features an array of Single-Photon Avalanche Diodes (SPADs), granting it extreme sensitivity to detect the faint, reflected laser dot. The sensor's key capability is its incredible speed; it can determine the dot's precise (X,Y) location on its image plane at a rate of 100 MHz, which translates to a new position measurement every 10 nanoseconds.
This high-speed detection enables a continuous triangulation process. The system knows the exact projection angle of the laser from the MEMS mirror at every instant. By combining this known angle with the dot's detected position on the SPAS sensor and the fixed physical baseline between the projector and sensor, a simple triangulation calculation yields a 3D depth point. This process repeats every 10 nanoseconds, progressively building a dense point cloud without ever capturing or processing a conventional image frame.
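The triangulation step described above can be sketched as a ray intersection: one ray from the projector at the known mirror angle, one ray through the detected sensor position. This is a minimal pinhole-model sketch with assumed parameter names (`baseline_m`, `focal_m`), not VoxelSensors' actual pipeline.

```python
import math

def depth_from_triangulation(theta_rad, x_sensor, baseline_m, focal_m):
    """Depth Z of the laser dot from one (angle, position) pair.

    theta_rad : known laser projection angle from the optical axis (mirror state)
    x_sensor  : detected X position of the dot on the image plane (metres)
    baseline_m: fixed distance between projector and sensor
    focal_m   : sensor focal length
    """
    # Sensor pinhole at origin, projector at x = baseline_m.
    # Sensor ray:    x = Z * (x_sensor / focal_m)
    # Projector ray: x = baseline_m + Z * tan(theta_rad)
    denom = x_sensor / focal_m - math.tan(theta_rad)
    if abs(denom) < 1e-12:
        return None  # rays (nearly) parallel: no reliable intersection
    return baseline_m / denom
```

One such evaluation per detection, every 10 nanoseconds, accumulates the point cloud directly, with no frame buffer in the loop.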
In this short video, you can learn:
* The three core components of the system: a laser beam scanner, a SPAS sensor, and a triangulation algorithm.
* How the SPAS sensor uses SPADs to achieve 100 MHz detection speed, enabling a new depth point every 10 nanoseconds.
* The principle of continuous triangulation that avoids the latency and power overhead of traditional frame-based depth sensing.
**Clip Abstract** VoxelSensors' technology redefines 3D sensing by combining a fast MEMS-based laser scanner with a novel Single Photon Active Event Sensor (SPAS). This architecture enables continuous triangulation, generating a new 3D depth point every 10 nanoseconds for an ultra-low latency, frame-free point cloud.
Link in comments

#ActiveTriangulation, #MEMSLaserScanner, #SPADSensor, #HighSpeed3DSensing, #AugmentedReality, #WearableElectronics
This is a highlight of the presentation:
VoxelSensors develops advanced sensing technology for Spatial and Empathic interfaces in Mobile, XR, and industrial applications.
MicroLEDs, AR/VR Displays, Micro-Optics 2025: Innovations, Start-Ups, Market Trends
Online | TechBlick platform
Organised By:
TechBlick
MicroLED Connect
More Highlights from the same talk.
00:08:37 - 00:11:20
How do you detect a single laser dot when it's lost in a sea of ambient photons, even direct sunlight?
The primary challenge of using highly sensitive Single-Photon Avalanche Diodes (SPADs) is their susceptibility to ambient light. In bright conditions, a SPAD array is flooded with triggers from sunlight and its own thermal noise, creating a "sea of photons" that makes it nearly impossible to distinguish the signal from the active laser illumination. This presents a massive signal-to-noise ratio problem that must be solved directly on the sensor.
VoxelSensors' key innovation is an in-pixel spatial and temporal filtering unit. This sophisticated circuitry, located directly within each pixel, is designed to discriminate the laser dot from spurious background events. The filtering logic leverages known characteristics of the signal, such as the specific geometry (shape and size) of the laser dot as it appears on the sensor and the physical constraint that its movement follows a continuous, predictable trajectory.
The filtering process is multi-staged for maximum effectiveness. After the initial in-pixel filtering rejects the majority of noise, a secondary on-chip event stream filter further refines the data, enhancing the signal-to-noise ratio. The final, clean output is an event-based data stream in an Address Event Representation (AER) format, which provides the X coordinate, Y coordinate, and a precise timestamp (T) of only the validated laser dot detections. This "active event" approach ensures that only relevant data is processed and transmitted, dramatically reducing bandwidth and subsequent computational load.
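The trajectory-based discrimination described above can be sketched in software, even though in the real sensor it is circuitry inside each pixel. This toy filter keeps only events that land near where the known scan trajectory predicts the dot should be at each timestamp; the `Event` type, the tolerance value, and `predict_xy` are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One AER-style detection: X, Y coordinates and a timestamp T."""
    x: int
    y: int
    t_ns: int

def filter_events(events, predict_xy, spatial_tol=3):
    """Keep events consistent with the predicted dot trajectory (toy model).

    predict_xy(t_ns) -> (x, y): expected dot position at that timestamp,
    derived from the known, continuous scan pattern. Events far from the
    prediction are treated as ambient/thermal noise and dropped.
    """
    kept = []
    for ev in events:
        px, py = predict_xy(ev.t_ns)
        if abs(ev.x - px) <= spatial_tol and abs(ev.y - py) <= spatial_tol:
            kept.append(ev)
    return kept
```

The surviving (X, Y, T) tuples correspond to the clean event stream the clip describes: only validated detections are transmitted, so downstream bandwidth and compute scale with signal, not noise.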
In this short video, you can learn:
* The fundamental challenge of using SPADs in high ambient light conditions.
* The role of the in-pixel spatial and temporal filtering unit in discriminating the laser dot based on its geometry and trajectory.
* The multi-stage filtering process that results in a clean, event-based (XYT) data stream of only the active illumination.
**Clip Abstract** This clip details the critical challenge of using ultra-sensitive SPAD sensors in bright sunlight and VoxelSensors' solution. Their technology employs a sophisticated multi-stage filtering architecture, including a unique in-pixel spatial and temporal filter, to isolate the active laser signal from overwhelming ambient noise.
Link in comments
#SPADArrays, #InPixelFiltering, #EventBasedSensing, #SignalToNoiseEnhancement, #ARVRInteraction, #3DSensing
00:11:48 - 00:13:21
How can you slash sensor power consumption by over 90% using a simple geometric constraint?
Even with advanced filtering, a sensor array where every pixel is active will still generate numerous noise-related SPAD activations in bright light, consuming significant power. VoxelSensors leverages a key characteristic of its triangulation system to overcome this. Because the laser projector and the sensor are mounted on the same horizontal axis in an AR headset, the disparity (the difference in position between the projected dot and the detected dot) occurs almost exclusively in the horizontal (X) direction.
This geometric principle, known as the epipolar constraint, means the system knows precisely which horizontal line of pixels the laser dot will appear on at any given moment as it scans the scene. Instead of keeping the entire sensor array powered on, the system can dynamically activate only a narrow, horizontal band of pixels corresponding to the expected location of the laser reflection. All other pixels across the sensor remain in a low-power, inactive state.
This "active region of interest" strategy dramatically reduces the number of pixels susceptible to ambient light, which in turn slashes the number of noise-induced SPAD activations and the associated data processing. This leads to a massive reduction in both data noise and, more importantly, the sensor's overall power consumption. This system-level optimization is critical for achieving the sub-75 milliwatt power budget required for all-day, always-on perception in AR glasses.
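The row-banding idea is simple enough to sketch directly. This is a toy model, not the sensor's control logic: the band half-width and array size are assumed values chosen only to illustrate the arithmetic behind the ">90%" claim.

```python
def active_rows(expected_row, band_halfwidth, n_rows):
    """Rows to keep powered, given the epipolar constraint (toy model).

    With projector and sensor on the same horizontal axis, disparity is
    almost purely horizontal, so the dot's row is known from the scan
    state; only a narrow band around it needs to be active.
    """
    lo = max(0, expected_row - band_halfwidth)
    hi = min(n_rows - 1, expected_row + band_halfwidth)
    return range(lo, hi + 1)

def power_saving_fraction(band_halfwidth, n_rows):
    """Fraction of the array that can stay dark (ignoring edge clipping)."""
    active = min(n_rows, 2 * band_halfwidth + 1)
    return 1.0 - active / n_rows
```

For example, a 5-row band on a 100-row array leaves 95% of the pixels dark, consistent with the over-90% reduction the clip describes; the band sweeps with the scan, so every row is still covered over a full scan period.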
In this short video, you can learn:
* The concept of the epipolar constraint in a stereo or triangulation vision system.
* How VoxelSensors uses this constraint to activate only a small, relevant portion of the sensor array at any time.
* The massive impact of this "active region of interest" strategy on reducing both noise and power consumption.
**Clip Abstract** Discover a clever power-saving technique that exploits the geometry of triangulation systems. By activating only a narrow horizontal band of pixels where the laser dot is expected to appear, VoxelSensors dramatically reduces noise and power consumption, making its technology viable for always-on AR applications.
Link in comments
#EpipolarConstraint, #ActiveRegionOfInterest, #SPADSensors, #LowPowerARPerception, #AugmentedReality, #WearableElectronics




