Apple expands its 3D mapping technology with longer range “multi-sensor” depth mapping

Apple acquired the 3D mapping technology behind Face ID on the iPhone X through its 2013 acquisition of the Israeli company PrimeSense. In 2015, a granted patent originally filed by PrimeSense illustrated a projector system for depth sensing well beyond Face ID. PrimeSense technology originally powered the Xbox Kinect motion controller. Over the years, Apple has refined this 3D depth mapping technology, with one of the patents covered in our 2021 report titled “Apple Invents Enhanced Depth Mapping using Visual Inertial Odometry for iPhone and/or a Tabletop Device.”

Today, the US Patent & Trademark Office published a patent application from Apple titled “Multi-sensor depth mapping,” which continues to advance its 3D mapping technology. Apple states that “the present invention relates generally to systems and methods for depth mapping, and in particular to improve the accuracy of depth maps.” More specifically, Apple states that the invention aims to provide accurate depth mapping beyond short distances such as those covered by Face ID, which uses a disparity-based depth mapping system.

According to Apple, ToF-based depth mapping systems are more accurate at longer ranges and less susceptible to mechanical and thermal effects than disparity-based depth measurements. However, short-range ToF measurements can be severely affected by small discrepancies between the timing of photon transmission and the time-sensitive signals generated in response to photon arrival. Furthermore, while disparity-based depth-mapping systems can use standard image sensors with small spacing and high transverse resolution, ToF systems typically require special-purpose radiation sources and range sensors with inherently lower resolution.

Embodiments of the present invention provide depth mapping systems that combine the high lateral resolution of disparity-based depth sensors with the high depth accuracy of ToF-based range sensors.

These systems use the accurate depth measurements made by a ToF sensor to generate a disparity correction function, which is then applied to improve the accuracy of disparity-based depth measurements made by a patterned light or stereoscopic depth sensor. This disparity correction is particularly important at longer measurement distances and when compensating for calibration losses due to factors such as mechanical shock and environmental conditions. In some embodiments, the disparity-corrected depth measurements performed by the disparity-based depth sensor are also used to calculate a range correction function that can be used to improve the accuracy of the longer range depth measurements provided by the ToF sensor.
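The correction step described above can be illustrated with a minimal numerical sketch. This is not Apple's implementation: the polynomial form of the correction function, the sample depths, and the drift model are all illustrative assumptions. The idea is simply that sparse but accurate ToF depths at known scene locations can be used to fit a function that re-maps a dense, drift-prone disparity-based depth map.

```python
import numpy as np

def fit_disparity_correction(tof_depth, disparity_depth, degree=2):
    """Fit a polynomial mapping disparity-based depth estimates to the
    more accurate ToF depths sampled at the same scene locations.
    (Polynomial form is an assumption for illustration.)"""
    coeffs = np.polyfit(disparity_depth, tof_depth, degree)
    return np.poly1d(coeffs)

# Hypothetical sparse samples (meters): ToF readings and the
# disparity-based depths at the same matrix of spots. The disparity
# values drift increasingly with range, mimicking calibration loss.
tof_samples       = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
disparity_samples = np.array([0.52, 1.06, 2.15, 3.30, 4.50])

correct = fit_disparity_correction(tof_samples, disparity_samples)

# Apply the fitted correction to a dense disparity-based depth map.
dense_disparity_depth = np.array([[1.06, 2.15],
                                  [3.30, 4.50]])
corrected = correct(dense_disparity_depth)
```

Note that the error in the raw disparity depths grows with distance, which is why the patent emphasizes that the correction matters most at longer measurement ranges.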

In the disclosed embodiments, an illumination assembly directs modulated optical radiation onto a target scene. For ToF detection purposes, the radiation is modulated in time, for example in the form of short pulses for direct ToF detection or carrier wave modulation for indirect ToF detection. In addition, the radiation can be spatially modulated to project a pattern of structured light for disparity-based detection. Based on the temporal modulation, a range sensor detects respective times of flight of photons reflected from a matrix of locations arranged over the target scene. For disparity-based detection, a camera captures a two-dimensional image of the target scene.
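The two range principles the paragraph describes can be sketched in a few lines. The focal length, baseline, and timing values below are illustrative assumptions, not figures from the patent: direct ToF recovers depth from the photon round-trip time, while a structured-light or stereoscopic sensor recovers it by triangulation from disparity.

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_s: float) -> float:
    """Direct ToF: the photon travels out and back, so depth = c * t / 2."""
    return C * round_trip_s / 2.0

def disparity_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulation: z = f * B / d for a stereo or structured-light pair
    with focal length f (pixels), baseline B (meters), disparity d (pixels)."""
    return f_px * baseline_m / disparity_px

# A ~13.3 ns round trip corresponds to roughly 2 m of depth.
print(tof_depth(13.34e-9))
# Assumed 800 px focal length and 5 cm baseline: a 20 px disparity gives 2 m.
print(disparity_depth(800.0, 0.05, 20.0))
```

The 1/d relationship in the triangulation formula is why disparity-based accuracy degrades with range (a one-pixel disparity error costs more depth at distance), whereas ToF error is roughly constant with range, matching the trade-off the patent describes.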

Apple’s patent FIG. 1 below is a schematic pictorial representation of a depth mapping system; FIG. 2 is a schematic side view of the depth mapping system of FIG. 1.

(Apple patent figures: new depth camera — Patently Apple report, 11/17/2022)

See Apple’s patent application US 20220364849 A1 for more details.

Apple’s patent doesn’t specify which product line this advanced 3D depth mapping is intended for. While the invention could apply to a future iPhone, the device shown in patent FIG. 1 is not an iPhone. Could Apple be hinting at a future high-end Apple TV box that will support Apple Fitness+ and interactive gaming, or something entirely new?


The inventors are listed as follows:

Shay Yosub: Technical Director, Depth Hardware

Assaf Avraham: System Manager (ex-PrimeSense)

Joe Nawasra Ph.D: Technical Lead, Camera Hardware Design

Jonathan Pokrass: Algorithm Manager

Moshe Laifenfeld: Depth Detection Algorithm Manager

Niv Gilboa: Electro-Optics Hardware Manager

Tal Kaitz: Algorithm team leader

Ronen Akerman: System Technology Team Leader

Naveh Levanon: Image Processing Engineer


