Inside A11 Bionic Chip | Apple’s Approach to Artificial Intelligence

Launched on September 12, the iPhone X, iPhone 8, and iPhone 8 Plus run on the new A11 Bionic chip, yet again raising the performance bar. Every new iPhone generation comes with better hardware, cameras, and faster processors. However, for the first time, an Apple smartphone, the iPhone X, features a custom-built chip developed to handle artificial intelligence workloads.

Design

The A11 Bionic chip features a 64-bit, six-core CPU with four energy-efficient cores, named Mistral, and two high-performance cores, named Monsoon. Unlike the A10, all six cores can be utilized simultaneously, thanks to a second-generation performance controller.

The A11 also integrates a custom-built three-core GPU, eliminating the need for Imagination Technologies' hardware. This should save Apple considerable money, as it previously paid a per-device royalty for PowerVR architecture implementations in earlier processor designs.

A dedicated Neural Engine within the chip is capable of performing up to 600 billion operations per second. It helps the smartphone run Face ID, Animoji, and augmented reality applications, and perform other machine learning tasks without a glitch.
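To get a rough feel for what "operations per second" counts, the Python sketch below tallies the multiplies and additions in a single small fully connected neural-network layer. The layer sizes are invented for illustration; Apple has not published the Neural Engine's internals, so this is a back-of-the-envelope sketch, not a description of the actual hardware.

```python
# Illustrative only: count the multiply and add operations in one dense
# (fully connected) layer, the kind of work a neural engine accelerates.
# Layer sizes below are arbitrary, not Apple's.

def dense_layer_ops(inputs: int, outputs: int) -> int:
    """Multiplies + additions for y = W @ x (one forward pass)."""
    multiplies = inputs * outputs
    additions = inputs * outputs  # accumulations into each output
    return multiplies + additions

# A hypothetical 512-input, 128-output layer:
ops = dense_layer_ops(512, 128)
print(ops)  # 131072 operations per forward pass

# At 600 billion ops/second, hardware could in principle evaluate
# this layer millions of times per second.
passes_per_second = 600_000_000_000 // ops
print(passes_per_second)
```

The point of the arithmetic is scale: even a multi-layer network costing millions of operations per frame fits comfortably inside a 600-billion-ops-per-second budget, which is why features like Face ID can run in real time on-device.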

The A11 also has a new M11 motion coprocessor, whose job is to gather data from the integrated gyroscope, compass, and accelerometer, offloading sensor processing from the CPU. Also embedded in the A11 is a new image processor that supports advanced pixel processing, wide color capture, and lighting estimation.
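To illustrate the kind of continuous sensor work a motion coprocessor takes off the CPU, here is a self-contained Python sketch of a textbook complementary filter that fuses gyroscope and accelerometer readings into one tilt estimate. The readings, rates, and filter constant are invented for illustration; Apple does not document the M11's internal algorithms.

```python
# Illustrative only: a classic complementary filter fusing a gyroscope
# rate (deg/s) with an accelerometer-derived tilt angle (deg). This is
# a standard textbook technique, not Apple's actual M11 implementation.

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    # Trust the gyro for short-term changes, the accelerometer long-term.
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Simulated 100 Hz samples: device at rest, tilted 10 degrees,
# so the gyro reads ~0 and gravity implies a 10-degree tilt.
angle = 0.0
for _ in range(200):  # 2 seconds of samples
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_angle=10.0, dt=0.01)

print(round(angle, 2))  # converges toward 10.0
```

Running this kind of filter at hundreds of hertz is cheap for dedicated silicon but a constant drain if the main CPU must wake for every sample, which is the rationale for a separate low-power coprocessor.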

Manufactured by the Taiwan Semiconductor Manufacturing Company (TSMC), the chip contains 4.3 billion transistors and is built on a 10 nm FinFET (fin field-effect transistor) process.

Performance

According to Apple, the new CPU is up to 25% faster and the GPU up to 30% faster than the A10. More specifically, the company claims the energy-efficient cores are up to 70% faster and the performance cores up to 25% faster than their A10 counterparts.

According to Geekbench 4 scores (actual measured data), the A11 chip is far ahead of other devices, trailing only Apple's own 10.5-inch iPad Pro. Android devices like the Galaxy S8+ and Google Pixel are not even comparable.

The A11 chip is not only outpacing the current A10 generation, but also matching the single-core performance of some 13-inch MacBook Pro configurations.

The chip is specially tuned to enhance the photography experience, with faster low-light autofocus and reduced multiband noise. "It would be ideal for 3D games and applications, and would accelerate games that use Metal 2 technology," said Phil Schiller, Apple's senior VP of worldwide marketing.

The new Animoji feature uses the front-facing TrueDepth camera to track more than 50 facial muscle movements and mirror the user's expressions in animated emoji. Face ID likewise uses the TrueDepth camera to project and analyze over 30,000 invisible dots, building a precise depth map of the user's face. This kind of processing is made possible by the Neural Engine.
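Conceptually, a face tracker like the TrueDepth system emits per-muscle coefficients between 0 and 1 each frame, and the avatar layer interprets them. The Python sketch below shows that last step; the coefficient names and the 0.5 threshold are hypothetical (loosely modeled on the blend-shape coefficients ARKit exposes), not Apple's actual pipeline.

```python
# Illustrative only: Animoji-style expression mapping. A face tracker
# outputs per-muscle coefficients in [0, 1]; the avatar picks the
# dominant one. Names and the threshold are hypothetical.

def dominant_expression(blend_shapes: dict) -> str:
    """Return the label of the strongest activated blend shape."""
    name, weight = max(blend_shapes.items(), key=lambda kv: kv[1])
    return name if weight > 0.5 else "neutral"

# One hypothetical frame of tracking output (real trackers report
# 50+ such coefficients, one per tracked muscle movement).
frame = {
    "jaw_open": 0.12,
    "mouth_smile": 0.81,
    "brow_inner_up": 0.33,
    "eye_blink_left": 0.05,
}
print(dominant_expression(frame))  # mouth_smile
```

The hard part, of course, is producing those 50+ coefficients from a 30,000-point depth map at camera frame rates, which is the inference workload the Neural Engine is built to handle.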

AI Chips By Other Companies

Apple is not alone: many companies are customizing hardware to meet the demands of artificial intelligence as it becomes more common in software. Microsoft recently revealed an AI chip for upcoming versions of its HoloLens mixed reality headset. Google has already developed two generations of chips for handling AI-related computational workloads. And Huawei, a Chinese tech company, claims that its Kirin 970 can handle complex tasks like image recognition up to 20 times faster than conventional CPUs.

Read: Artificial Intelligence vs Machine Learning vs Deep Learning | The Difference

"In the near future, we'll see digital signal processors specially developed for neural network inference and training," said Dave Burke, Google's Android chief, at the Google I/O 2017 conference.
