A peek inside Mobileye’s EyeQ5 – Part 1

Article By : Junko Yoshida

Mobileye's chief engineer sits down with EE Times to explain EyeQ5, the architecture of a driverless car in 2020, accelerators in the SoC and Google’s recently unveiled custom ASIC for machine learning.

In May, Mobileye announced its fifth-generation SoC, called EyeQ5, which the company described as the “central computer performing sensor fusion for self-driving cars in 2020.”

Obviously, the Israeli company wanted to make sure Tier Ones and car OEMs know what Mobileye will have a few years from now. Perceiving automakers’ rush toward driverless cars, Mobileye didn’t want the auto industry to make a hasty decision.

The timing of Mobileye’s EyeQ5 announcement, however, raised a few analysts’ eyebrows.

In their view, Mobileye’s premature announcement of EyeQ5 was a pre-emptive strike on rival autonomous car platforms.

We caught up with Mobileye recently to find out in detail what’s inside EyeQ5, and what has driven Mobileye to devise an SoC with a processing power of 12 tera operations per second (TOPS) while keeping power consumption below 5W.

The EyeQ5’s claimed performance is nothing short of impressive. The promised chip makes almost everything currently available on the ADAS market pale in comparison.
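As a rough sense of scale, the two headline numbers imply the efficiency figure sketched below. This is only a back-of-the-envelope calculation using the publicly quoted specs; Mobileye does not specify which operation type it is counting.

```python
# Back-of-the-envelope efficiency implied by Mobileye's quoted EyeQ5 figures.
# Assumption: the 12 tera-operations/sec and the sub-5W figures are directly
# comparable headline numbers; the operation type being counted is not stated.
perf_tops = 12   # claimed throughput, in tera operations per second
power_w = 5      # claimed power ceiling, in watts

efficiency_tops_per_w = perf_tops / power_w
print(f"Implied efficiency: ~{efficiency_tops_per_w:.1f} TOPS/W")  # ~2.4 TOPS/W
```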

The hitch, though, is that the Mobileye SoC won’t even sample until 2018. Moreover, the company’s current SoC, EyeQ4, only began sampling in the first quarter of this year and isn’t slated for volume production until 2018.

Rush for ‘driverless’ autonomous cars

So, why is Mobileye in such a hurry?

A Mobileye spokesman said the move reflects automakers’ growing impatience and an accelerated timetable for driverless autonomous vehicles.

Twelve to 18 months ago, automakers were more inclined to develop an autonomous car that allows the driver to take his mind off driving on the highway, according to Mobileye. That would be a Level 3 autonomous car under the SAE standard, defined as one in which, “within known, limited environments (such as freeways), the driver can safely turn their attention away from driving tasks.”

Now, automakers want autonomous cars that can operate without a driver – sooner rather than later, according to Mobileye.

The concept of “shared” autonomous vehicles pursued by ride-share companies like Uber has triggered the change. Rather than sitting on the sidelines and viewing the ride-share business model as a threat, automakers now want a piece of the action. They see ride-sharing as a “testbed” for upcoming autonomous cars. If the technology works, consumers might be more willing to buy into autonomous cars in the future.

To keep up with its customers’ more aggressive timetable, Mobileye has also “slightly pulled forward” its plan for EyeQ5, to 2020.

But how does Mobileye picture the architecture of a car in 2020, compared to what we have today?

Vision processors — Mobileye's bailiwick

We asked Elchanan Rushinek, Mobileye’s senior vice president of Engineering, to decipher EyeQ5. Rushinek, a 16-year Mobileye veteran who heads up the silicon (hardware) design group, discussed topics ranging from the architecture of the accelerators inside EyeQ5 to sensor fusion functionality and Google’s recently announced TPU, a custom ASIC built specifically for machine learning.

__Figure 1:__ *Elchanan Rushinek, Mobileye’s senior vice president of Engineering*

In an email exchange, Rushinek responded to our questions about 2020 car architecture as follows: “There will be 360-degree coverage by cameras and ultrasonic. Part of the 360-degree coverage will be also covered by radar/LIDAR. There will also be some dedicated processors for specific sensors and central processors for fusion and decision making.”

He sees the vision processors – Mobileye’s bailiwick – bearing the responsibility of central processing.

Rushinek explained, “The nature of central processing is heavily around perception of the scene, which makes vision processors, like EyeQ4/5, the natural choice for the central processing.”

He added, “Most highly/fully autonomous driving solutions will demand full redundancy of the processors for safety reason, to the level of duplicating all processing chain.”

Mobileye’s strategy is rooted in making the most of the company’s EyeQ vision processors, which are said to be installed in approximately 10 million cars already.

This year at the Consumer Electronics Show, Mobileye unveiled what it calls a Road Experience Management (REM) system designed to create “crowd-sourced real-time data” for precise localisation and high-definition lane data — an essential layer of information to support fully autonomous driving.

The technology is based on software running on Mobileye’s EyeQ processing chips. It extracts landmarks and roadway information at extremely low bandwidths of roughly 10 kilobits (Kb) per kilometer of driving. (Google, in contrast, does its localisation and high-definition mapping at a gigabit per kilometer.) Mobileye explained that backend software running in the cloud integrates the data segments sent by all vehicles running the on-board software into a global map.
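To put those two round numbers side by side, a quick back-of-the-envelope comparison (treating “a gigabit per kilometer” literally as 10^9 bits; both figures are the companies’ own approximations) looks like this:

```python
# Rough comparison of the quoted per-kilometer mapping bandwidths.
# Both inputs are the round, self-reported figures cited above.
rem_bits_per_km = 10e3      # Mobileye REM: ~10 kilobits per kilometer of driving
google_bits_per_km = 1e9    # Google: ~1 gigabit per kilometer of driving

ratio = google_bits_per_km / rem_bits_per_km
print(f"REM's map data is roughly {ratio:,.0f}x smaller per kilometer")  # ~100,000x
```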

Mobileye’s visual interpretation scheme (which helps to compress data) will enable automakers to create their own “road book.”

So, Mobileye has a mapping strategy that leverages its EyeQ vision processors. But beyond maps, how much LiDAR, radar and ultrasound data – coming from non-vision sensors – can be processed by EyeQ5? Is it raw data that the EyeQ5 processes?

Find out about sensor fusion inside EyeQ5 in part two of this article.
