A peek inside Mobileye’s EyeQ5 – Part 2

Article By : Junko Yoshida

Mobileye's chief engineer sits down with EE Times to explain EyeQ5, the architecture of a driverless car in 2020, accelerators in the SoC and Google’s recently unveiled custom ASIC for machine learning.

Sensor fusion inside EyeQ5

Rushinek said that EyeQ5 was designed to support more than 16 cameras in addition to multiple radars and LIDARs, including the low-level processing of all sensors.

At a more technical level, he explained, “There are 16 virtual MIPI channels and more than 16 sensors can be supported by multiplexing several physical sensors on a single virtual MIPI channel.”
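To picture what that multiplexing means in practice, consider the sketch below. Mobileye has not published the EyeQ5’s driver interfaces, so the packet structure and handler names here are hypothetical; the point is simply that each CSI-2 packet carries a virtual channel ID, and the receiver routes it to the handler registered for that channel even when several physical sensors share the link.

```c
/* Illustrative sketch only: how a MIPI CSI-2 receiver might route packets
 * when several physical sensors are multiplexed onto virtual channels.
 * The EyeQ5's actual interfaces are not public; names here are hypothetical. */
#include <stdint.h>
#include <stdio.h>

#define NUM_VIRTUAL_CHANNELS 16

typedef struct {
    uint8_t  virtual_channel;  /* CSI-2 virtual channel ID (0-15) */
    uint8_t  data_type;        /* e.g. 0x2C = RAW12 image data */
    uint16_t word_count;       /* payload length in bytes */
    const uint8_t *payload;
} csi2_packet_t;

/* One handler per virtual channel; several physical sensors can be
 * time-multiplexed onto the same channel by an upstream aggregator. */
typedef void (*frame_handler_t)(const csi2_packet_t *pkt);

static frame_handler_t handlers[NUM_VIRTUAL_CHANNELS];

static void route_packet(const csi2_packet_t *pkt)
{
    if (pkt->virtual_channel < NUM_VIRTUAL_CHANNELS &&
        handlers[pkt->virtual_channel])
        handlers[pkt->virtual_channel](pkt);
}

static void camera_frame_handler(const csi2_packet_t *pkt)
{
    printf("camera frame on VC%u, %u bytes\n",
           pkt->virtual_channel, pkt->word_count);
}

int main(void)
{
    handlers[0] = camera_frame_handler;   /* e.g. front camera on VC0 */
    uint8_t dummy[4] = {0};
    csi2_packet_t pkt = { .virtual_channel = 0, .data_type = 0x2C,
                          .word_count = sizeof dummy, .payload = dummy };
    route_packet(&pkt);
    return 0;
}
```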

But what exactly are those cores inside EyeQ5 processing?

Rushinek explained, “The PMA (Programmable Macro Array) and VMP (Vector Microcode Processor) cores of EyeQ5 can run deep neural networks extremely efficiently, enabling low level support for any dense resolution sensor (cameras, next generation LIDARs and radars).”
He added, “FFT (Fast Fourier Transform) processing needed for radars and ultrasound could efficiently run on the VMP. We already have all radar processing run together with advanced camera processing on a single EyeQ3 (EyeQ5 is expected to be ~50-60 times stronger than EyeQ3) in production (e.g. Volvo XC90 and other Volvo models).”
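Rushinek’s FFT remark is easy to ground with a toy example. The sketch below runs a naive DFT over one synthetic radar chirp and picks out the strongest range bin; on EyeQ5 this class of work would be offloaded to the VMP, whereas the code here is plain, unoptimised host C intended only to show the computation.

```c
/* Toy range-processing example: a naive DFT over one chirp's ADC samples.
 * Data is synthetic; this is not Mobileye code, just the underlying math. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 64  /* samples per chirp (illustrative) */

int main(void)
{
    double samples[N], re[N], im[N];

    /* Synthetic beat signal: one target produces one dominant tone. */
    for (int n = 0; n < N; n++)
        samples[n] = cos(2.0 * M_PI * 5.0 * n / N);

    /* Naive DFT: the magnitude of bin k reflects energy at a range
     * proportional to k. A real pipeline would use a fast FFT instead. */
    for (int k = 0; k < N; k++) {
        re[k] = im[k] = 0.0;
        for (int n = 0; n < N; n++) {
            double phase = -2.0 * M_PI * k * n / N;
            re[k] += samples[n] * cos(phase);
            im[k] += samples[n] * sin(phase);
        }
    }

    /* Report the strongest range bin. */
    int peak = 0;
    for (int k = 1; k < N / 2; k++)
        if (hypot(re[k], im[k]) > hypot(re[peak], im[peak]))
            peak = k;
    printf("strongest return in range bin %d\n", peak);
    return 0;
}
```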

What about power consumption?

“EyeQ5 accelerators were designed to optimise the performance per watt for machine learning and vision processing, enabling advanced processing of many sensors within a reasonable power consumption budget,” Rushinek said.

Is EyeQ5 capable of decision making, too?

“EyeQ5 has advanced multi-threaded processing units (general purpose CPU and the MPC core) offering plenty of power for decision making and high level fusion,” Rushinek explained.

[block-diagram-EyeQ5]
__Figure 1:__ *EyeQ5 block diagram (Source: Mobileye)*

Do sensors send raw data to master ECU?

One thing that’s never clear is how the processing of sensory data is “split” between hardware and software. How much data analysis is each sensor expected to do before sending data to the master ECU? Or do we expect sensors to send “raw” data to the master ECU?

“There is no simple answer, since in reality there will be some level of low level processing done in the sensor in parallel to sending some of the low level information to the central ECU,” said Rushinek.

“Camera processing is the most computationally intensive, so any solution would include dedicated vision processing ECUs in addition to sending some of the raw data to the central ECU. Both EyeQ4 and EyeQ5 can support both goals — specific vision processing, as well as master ECU,” he added. “The EyeQ controls each sensor via I2C bus on a frame basis in order to get the optimised output, which is deeply aligned with real time algorithms.”
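Mobileye has not published the EyeQ’s sensor-control interface, but the idea of frame-by-frame control over an I2C bus can be sketched with the standard Linux i2c-dev API. The bus number, sensor address and exposure register below are hypothetical, not the EyeQ’s actual (undisclosed) register map.

```c
/* Sketch of per-frame sensor control over I2C, in the spirit of Rushinek's
 * description. Uses the standard Linux i2c-dev interface; the bus, sensor
 * address and exposure register are assumptions for illustration only. */
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define SENSOR_I2C_ADDR 0x36      /* hypothetical camera sensor address */
#define REG_EXPOSURE    0x3500    /* hypothetical 16-bit exposure register */

static int set_exposure(int fd, uint16_t exposure)
{
    /* 16-bit register address followed by 16-bit value, big-endian. */
    uint8_t buf[4] = { REG_EXPOSURE >> 8, REG_EXPOSURE & 0xFF,
                       exposure >> 8, exposure & 0xFF };
    return write(fd, buf, sizeof buf) == sizeof buf ? 0 : -1;
}

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0 || ioctl(fd, I2C_SLAVE, SENSOR_I2C_ADDR) < 0) {
        perror("i2c setup");
        return 1;
    }

    /* In a real pipeline this would run once per frame, driven by the
     * algorithm's analysis of the previous frame (an auto-exposure loop). */
    for (uint16_t exposure = 0x0100; exposure <= 0x0400; exposure += 0x0100)
        if (set_exposure(fd, exposure) != 0)
            perror("exposure write");

    close(fd);
    return 0;
}
```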

When Mobileye announced the EyeQ5, the company talked about fully programmable accelerators. Are you suggesting that these accelerators are based on a DSP? If so, is it Mobileye’s DSP or is it licensed from elsewhere?

Heterogeneous accelerators

Rushinek made it clear that the EyeQ5 accelerators aren’t a licensed IP. Instead, they “have been developed fully in-house for over 15 years, co-designed with Mobileye’s ADAS applications and SoCs, including hardware design, programming model and toolchain.”

EyeQ5 builds on the accelerators used in Mobileye’s vision SoCs from EyeQ2 through EyeQ4, which were designed to enable advanced ADAS applications. Rushinek said that EyeQ5 “will feature the latest generation of accelerators found in Mobileye’s previous SoCs.”

The accelerators are heterogeneous, he said. “There's more than a single programming model involved.” What is common to all of them, however, is that “they're programmed in a version of the C language extended with features mapping to their specific hardware architecture.” He explained that “this is true for most accelerators on the market.”
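Mobileye’s accelerator toolchain is proprietary, so the extensions themselves can’t be shown here. As an analogy for what “C extended with features mapping to the hardware” looks like, the sketch below uses the GCC/Clang vector extensions, where a type attribute lets a single C expression map directly onto the machine’s SIMD lanes.

```c
/* Analogy only: GCC/Clang vector extensions, not Mobileye's own dialect.
 * A type attribute widens an int to eight lanes, and arithmetic on that
 * type is lowered by the compiler to the target's vector hardware. */
#include <stdio.h>

typedef int v8i __attribute__((vector_size(32)));  /* eight 32-bit lanes */

int main(void)
{
    v8i a = {1, 2, 3, 4, 5, 6, 7, 8};
    v8i b = {8, 7, 6, 5, 4, 3, 2, 1};

    /* One C expression, evaluated element-wise across all lanes. */
    v8i c = a * b + a;

    for (int i = 0; i < 8; i++)
        printf("%d ", c[i]);
    printf("\n");
    return 0;
}
```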

Mobileye’s heterogeneous accelerators are “optimised for a wide variety of computer vision, machine learning and signal processing tasks,” according to Rushinek. “We believe them to be a much better fit for the mix of computational tasks involved in ADAS and autonomous driving than mapping all those tasks to a pre-existing accelerator developed for another market and re-branded as an ‘autonomous driving solution’ while remaining the same thing underneath.”

Validated by Google’s TPU

In short, in his opinion, machine learning demands a custom architecture.

Rushinek believes Mobileye’s approach is validated by Google’s recently announced “Tensor Processing Unit (TPU),” a custom ASIC built specifically for machine learning.

Rushinek said, “Google's TPU confirms our belief in one area – specifically neural networks – where Google too found a way to be more efficient than reusing existing products. We believe this is also true for the broader set of algorithms making up ADAS and AV.”

When Mobileye announced EyeQ5, the company casually mentioned its support for “an autonomous-grade standard operating system.” Exactly whose OS is Mobileye referring to?

Rushinek promised that EyeQ5 will support “a standard automotive OS,” but he declined to name names. “At this stage we are still in negotiations with the potential vendors, so we can’t disclose names.”

He added, “EyeQ5 supports hardware virtualisation and CPU/accelerator cache coherence, which together facilitates integration of software from multiple suppliers, including several OSes running concurrently.”

EyeQ5 obviously won’t be the only chip inside a vehicle. It needs to talk to other ECUs in the car. What sort of bus does it support, and how is that bus protected?

Rushinek said EyeQ5 supports a few standard communication channels: PCIe, SPI, Gigabit Ethernet, CAN-FD and UART. He noted, “All of the channels are secured via cyphering and authentication.”
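Rushinek did not detail the scheme, but “cyphering and authentication” on an in-vehicle link typically means each frame carries a cryptographic tag that the receiver verifies. The sketch below appends a truncated HMAC-SHA256 tag to a CAN-FD-sized payload using OpenSSL; the key, tag length and framing are assumptions for illustration rather than Mobileye’s actual design.

```c
/* Generic illustration of frame authentication on an in-vehicle bus.
 * Not Mobileye's scheme: key, tag length and framing are assumed. */
#include <openssl/hmac.h>
#include <stdint.h>
#include <stdio.h>

#define TAG_LEN 8  /* truncated tag, common on bandwidth-limited buses */

int main(void)
{
    const uint8_t key[16] = "demo-shared-key";          /* assumed pre-shared key */
    uint8_t payload[64] = "steering torque = 1.25 Nm";  /* CAN-FD max 64 bytes    */

    unsigned char tag[EVP_MAX_MD_SIZE];
    unsigned int tag_len = 0;
    HMAC(EVP_sha256(), key, sizeof key, payload, sizeof payload, tag, &tag_len);

    /* Transmit the payload plus the first TAG_LEN bytes of the tag; the
     * receiver recomputes the HMAC and drops the frame on a mismatch. */
    printf("tag: ");
    for (int i = 0; i < TAG_LEN; i++)
        printf("%02x", tag[i]);
    printf("\n");
    return 0;
}
```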

We’ve known that the ST/Mobileye team has used ST’s FD-SOI process technology for the previous EyeQ series of vision SoCs. Asked about EyeQ4, Rushinek confirmed that EyeQ4, currently sampling, is produced using 28nm FD-SOI.

However, the team will turn to FinFET (10nm or below) for EyeQ5 chips. Noting that the EyeQ5 will contain eight multithreaded CPU cores coupled with eighteen cores of Mobileye’s next-generation vision processors, Marco Monti, ST’s executive vice president, Automotive and Discrete Group, explained that this level of complexity has prompted ST to use the most advanced technology node available.

The ST/Mobileye team has not disclosed either the node (10nm or below) or the foundry partner.
