Compute library runs on ARMv7, ARMv8 CPUs

Article By : Peter Clarke

The ARM library's performance varies with the core implementation, the level of machine learning support and the number of cores.

« Previously: ARM library covers machine learning frameworks
 

At the library's launch at Mobile World Congress (MWC) 2017, ARM claimed that, in a given test, routines from the Compute Library ran roughly 14 to 15 times faster than equivalent routines from OpenCV running on Neon.

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. Neon is the 128-bit SIMD (Single Instruction, Multiple Data) architecture extension for the Cortex-A series of processors.

 
Figure 1: ARM Compute Library information from Mobile World Congress demo.
 

The ARM Compute Library runs on any ARMv7 or ARMv8 CPU and on any Mali Midgard or Bifrost GPU, an ARM spokesperson said. Performance varies with the core implementation, the level of machine learning support and the number of cores. Both single-core and multicore processing are supported, although it remains unclear whether heterogeneous computation is supported.

Certainly, Qualcomm has spoken about machine learning support within its Snapdragon line of application processors. In 2016, it introduced an SDK for neural network software that piggybacks on the existing Kryo CPU, Adreno GPU and Hexagon DSP cores inside the Snapdragon 820 processor. Meanwhile, plenty of start-ups claim to be working on best-in-class hardware for machine learning. Besides the likes of Synopsys, Cadence and Ceva offering machine learning support, there are start-ups such as TeraDeep, Graphcore, BrainChip and KnuEdge with their own machine learning processors.

First published by EE Times Europe.

 
