- Arm has announced Project Trillium, which features next-generation machine learning and object detection processors, along with neural network software.
- It will target a wide range of devices, from mobile and home entertainment to sensors and data centers, and beyond.
Arm, a multinational semiconductor and software design company, has announced a new-generation machine learning platform named Project Trillium. It is specifically developed for machine learning and neural network capabilities that scale across devices, from servers to connected cars.
The demand for artificial intelligence is growing enormously, and so is the need for innovations that handle large computations while maintaining a power-efficient footprint. The company launched this platform to give a wide range of devices a high degree of flexibility and scalability.
The machine learning technologies we have today each target only a specific class of device, and that needs to change. Although Project Trillium's initial focus is mobile processors, future products will offer the flexibility to move up the performance curve, from smart speakers and home entertainment to sensors and beyond.
Arm is the dominant player in mobile phone and tablet processors. Its Mali line of GPUs is used in laptops, more than 50 percent of Android tablets, and some versions of Samsung's smartwatches and smartphones, making it the third most popular GPU line on mobile platforms.
Arm's core designs are used in chips that support common network technologies in smartphones, such as broadband, Wi-Fi, and Bluetooth. Its main competitors are AMD, Qualcomm, Nvidia, and of course Intel. As of 2016, the total assets of the company were $3.21.
The New Machine Learning Processor
Along with a massive efficiency uplift, Arm's heterogeneous machine learning platform far exceeds the performance of conventional digital signal processor logic. According to the company, the mobile processor can perform over 4.6 trillion operations per second and deliver 2 to 4 times that effective throughput in real-world applications through smart data management.
These new processors deliver unmatched performance in cost- and thermally-constrained environments, with an efficiency of 3 trillion operations per second per watt. They also feature programmable layer engines for future-proofing, and are highly configurable for advanced-geometry implementations.
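The quoted throughput and efficiency figures also imply a rough power envelope. A quick back-of-the-envelope check, assuming both peak numbers hold simultaneously (which the source does not state):

```python
# Back-of-the-envelope check of the quoted figures (treated as peak values).
peak_ops_per_s = 4.6e12   # 4.6 trillion operations per second (quoted)
ops_per_watt = 3e12       # 3 trillion operations per second per watt (quoted)

# Implied power draw at peak throughput.
power_watts = peak_ops_per_s / ops_per_watt
print(f"Implied peak power: {power_watts:.2f} W")  # ≈ 1.53 W
```

In other words, the headline 4.6 trillion operations per second would fit in roughly a 1.5 W budget, which is consistent with the mobile-first positioning.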
The Arm Object Detection processor, on the other hand, is specifically developed to detect people and objects, handling a virtually unlimited number of objects per frame. It provides real-time detection with full-HD processing at 60 frames per second, up to 80 times the performance of conventional processors.
The Object Detection processor features a detailed people model that offers rich metadata and allows detection of trajectory, direction, pose, and gesture. It streams kilobyte-sized data, reducing bandwidth to the cloud and enabling aggregation of thousands of streams per server.
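The bandwidth saving is easy to see with some arithmetic. The figures below (raw 1080p60 video, ~4 KB of metadata per frame, a 10 Gbit/s server link) are illustrative assumptions, not numbers from the announcement:

```python
# Illustrative comparison: streaming raw 1080p60 video vs. kilobyte-scale
# detection metadata. All figures here are assumptions, not from the source.
frame_w, frame_h, bytes_per_pixel, fps = 1920, 1080, 3, 60
raw_video_bps = frame_w * frame_h * bytes_per_pixel * fps * 8  # bits per second

metadata_bytes_per_frame = 4 * 1024  # assume ~4 KB of metadata per frame
metadata_bps = metadata_bytes_per_frame * fps * 8

print(f"Raw video: {raw_video_bps / 1e9:.2f} Gbit/s")   # ≈ 2.99 Gbit/s
print(f"Metadata:  {metadata_bps / 1e6:.2f} Mbit/s")    # ≈ 1.97 Mbit/s

# Streams a server with an assumed 10 Gbit/s link could aggregate:
nic_bps = 10e9
print(f"Metadata streams per link: {int(nic_bps // metadata_bps)}")  # 5086
```

Under these assumptions, a single 10 Gbit/s link can carry only about three raw video streams but over five thousand metadata streams, which is the "thousands of streams per server" claim in practice.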
Overall, the two processors together deliver a high-performance, efficient object detection and recognition solution in a battery-friendly way.
Neural Network Software
Arm's neural network software bridges the gap between existing neural network frameworks (such as Caffe, TensorFlow, and Android NN) and the full range of Arm Cortex CPUs, Mali GPUs, and machine learning processors.
Simply put, it is a set of open-source Linux software and tools that enables machine learning workloads on power-efficient devices. Developers will be able to fully utilize the performance and capabilities of the underlying Arm hardware to get the highest performance from their machine learning applications.
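Conceptually, such a bridging layer takes a model trained in any framework and dispatches it to the most capable compute unit present on the device. A minimal sketch of that dispatch idea (this is not Arm's actual API; the backend names and priority order are invented for illustration):

```python
# Conceptual sketch of a framework-to-hardware bridge: pick the most capable
# available backend for a workload. Backend names and priorities are invented
# for illustration; this is NOT the real Arm software API.
BACKEND_PRIORITY = ["MLProcessor", "MaliGPU", "CortexCPU"]  # fastest first

def select_backend(available):
    """Return the highest-priority backend present on this device."""
    for backend in BACKEND_PRIORITY:
        if backend in available:
            return backend
    raise RuntimeError("no supported backend available")

def run_inference(model, available):
    backend = select_backend(available)
    # A real runtime would compile and optimize `model` for `backend` here;
    # this sketch only records where the model would run.
    return {"backend": backend, "model": model}

# On a device with only a CPU and GPU, inference falls back to the GPU:
result = run_inference("mobilenet", {"CortexCPU", "MaliGPU"})
print(result["backend"])  # MaliGPU
```

The point of the design is that application code stays the same while the runtime transparently uses whatever hardware is present, whether that is a Cortex CPU, a Mali GPU, or a dedicated machine learning processor.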
The new suite of Arm machine learning IP will be available for early preview in April, with general availability in mid-2018.