The iPhone X’s notch is basically a Kinect – The Verge

Sometimes it’s hard to tell exactly how fast technology is moving. “We put a man on the moon using the computing power of a handheld calculator,” as Richard Hendricks reminds us in Silicon Valley. In 2017, I use my pocket supercomputer of a phone to tweet with brands.

But Apple’s iPhone X provides a nice little illustration of how sensor and processing technology has evolved in the past decade. In June 2009, Microsoft unveiled this:


In September 2017, Apple put all that tech in this:


Well, minus the tilt motor.

Microsoft’s original Kinect hardware was powered by a little-known Israeli company called PrimeSense. PrimeSense pioneered the technique of projecting a grid of infrared dots onto a scene, detecting them with an IR camera, and ascertaining depth information through a special processing chip.

The output of the Kinect was a 320 x 240 depth map with 2,048 levels of sensitivity (distinct depths), based on the 30,000-ish laser dots the IR projector blasted onto the scene in a proprietary speckle pattern.
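The math behind this kind of structured-light sensor is, at its core, simple triangulation: each projected dot appears shifted in the IR camera's view by an amount that depends on how far away the surface is. A minimal sketch in Python (the focal length, baseline, and depth range below are illustrative placeholders, not the Kinect's actual calibration):

```python
# Structured-light depth sensing, reduced to its core triangulation step.
# A projector casts a known dot pattern; an IR camera a fixed baseline away
# sees each dot shifted horizontally (the "disparity") by an amount that
# depends on the depth of the surface the dot landed on:
#     depth = focal_length * baseline / disparity
# All numbers here are illustrative, not real PrimeSense calibration data.

def depth_from_disparity(disparity_px, focal_length_px=580.0, baseline_m=0.075):
    """Convert a measured dot shift in pixels to a depth in meters."""
    if disparity_px <= 0:
        raise ValueError("dot not matched, or at infinity")
    return focal_length_px * baseline_m / disparity_px

def quantize(depth_m, near_m=0.8, far_m=4.0, levels=2048):
    """Map a depth onto one of the sensor's discrete output levels
    (the original Kinect reported 2,048 distinct depths)."""
    clamped = min(max(depth_m, near_m), far_m)
    return round((clamped - near_m) / (far_m - near_m) * (levels - 1))

# With these example parameters, a dot shifted 29 pixels sits 1.5 m away:
depth = depth_from_disparity(29.0)   # 1.5 m
level = quantize(depth)              # one of 2,048 depth buckets
```

Repeat that for each of the ~30,000 dots and you get a depth map; the hard part, which PrimeSense's chip handled, is matching each observed dot back to its place in the known speckle pattern fast enough to do this 30 times a second.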

In its day, the Kinect was the fastest-selling consumer electronics device of all time, even as it was widely regarded as a flop for gaming. But its revolutionary depth-sensing tech ended up being a huge boost for robotics and machine vision.

In 2013, Apple bought PrimeSense. Depth cameras continued to evolve: Kinect 2.0 for the Xbox One replaced PrimeSense technology with Microsoft’s own tech and had much higher accuracy and resolution. It could recognize faces and even detect a player’s heart rate. Meanwhile, Intel also built its own depth sensor, Intel RealSense, and in 2015 worked with Microsoft to power Windows Hello. In 2016, Lenovo launched the Phab 2 Pro, the first phone to carry Google’s Tango technology for augmented reality and machine vision, which is also based on infrared depth detection.


And now, in late 2017, Apple is going to sell a phone with a front-facing depth camera. Unlike the original Kinect, which was built to track motion in a whole living room, the iPhone X’s sensor is primarily designed for scanning faces, and it powers Apple’s Face ID feature. Apple’s “TrueDepth” camera blasts “more than 30,000 invisible dots” and can create incredibly detailed scans of a human face. In fact, while Apple’s Animoji feature is impressive, the developer API behind it is even wilder: Apple generates, in real time, a full animated 3D mesh of your face, while also approximating your face’s lighting conditions to improve the realism of AR applications.

PrimeSense was never solely responsible for the technology in Microsoft’s Kinect — as evidenced by the huge improvements Microsoft made to Kinect 2.0 on its own — and it’s also obvious that Apple is doing plenty of new software and processing work on top of this hardware. But the basic idea of the Kinect is unchanged. And now it’s in a tiny notch on the front of a $999 iPhone.
