TI keeps head low in battle for robo-car supremacy

Article By : Junko Yoshida

The company’s financial results have demonstrated how well a modest strategy—focused more on Level 2 autonomous cars—has worked so far.

In the battle for glory as innovators of fully autonomous vehicle platforms, Texas Instruments is keeping its profile low.

It’s not that TI is indifferent to autonomy. It’s just that TI, one of the leading automotive chip suppliers, sees a different way to get there. Its plan is to use its current ADAS-focused platform to eventually enable Level 4 and Level 5 autonomous cars.

In a recent interview with EE Times, Brooke Williams, business manager in the automotive ADAS business unit at Texas Instruments, said TI has been actively participating in carmakers’ RFQs on models four to five years out. Some of the RFQs are for Level 4 and Level 5 autonomous cars. Others address ADAS features to achieve 5-star ratings. “We support all of their requests,” said Williams.

Above all, TI’s priority is responding to “needs for system-level safety across the board”—all cars, all models, according to Williams.

TI’s strengths lie in 30 years of ASIL D-level safety experience and a broad portfolio of technologies that includes power management, analog devices, networking solutions such as LVDS and Ethernet, and sensors including radar, he said. The only automotive electronics devices TI doesn’t offer are CMOS image sensors and memory.

This “system-level safety” argument might seem to be just TI talking points. But the company’s financial results demonstrate how well a modest strategy—focused more on Level 2 autonomous cars—has worked so far. TI reported in April better-than-expected first quarter revenue growth driven primarily by strong sales to the automotive and industrial markets.

Phil Magney, founder and principal advisor for Vision Systems Intelligence (VSI), noted, “TI does not subscribe to massive architectural overhauls.” He pointed out, “For TI, it is all about incremental ADAS features which become the enablers to automation. TI is not concerned with L4 and L5 at the moment. In time their architectures will support advanced levels of automation but for now they are targeting automotive safety and convenience features because that is where the money is.”

‘No wholesale change in platform’

OK. So, today’s TI is all about ADAS.

But really, what are the plans, if any, for TI to shift its current ADAS platform to Level 4/Level 5 autonomy? During the interview, TI’s Williams noted, “We don’t believe a wholesale change in the platform is needed” to add autonomy to cars.

That view, however, has triggered a host of questions from automotive industry analysts.

Luca De Ambroggi, principal analyst for automotive electronics at IHS Markit, said, “It’s not clear to me what TI wants to address with their [current ADAS] solution.” If TI is targeting mainly machine vision, he said, “I can understand their approach.” But if TI wants to use the same platform for L4 autonomous cars, “You might need to over-engineer the ‘L2’ systems a lot to support the ‘L4’ cases.”

Mike Demler, a senior analyst at the Linley Group, agreed. He pointed out, “There’s just no way that a system designed only for dual-function (L2) ADAS, such as lane-keeping assist, can handle L4 autonomous driving. For example, the L2 ADAS doesn’t employ neural-network training, but that’s a requirement for L4. Both architectures could employ DSP architectures, but the hardware capabilities and software stack will be significantly different. If you look at the evolution of Mobileye’s EyeQ processors, there isn’t a ‘wholesale’ change in architecture, but the EyeQ5 has much higher performance and more capabilities than EyeQ2. Same goes for systems employing Nvidia GPUs.”

TI’s Williams, however, insisted that TI is ready to bring deep learning to its TDAx platform by leveraging its heterogeneous hardware architecture.

He cited a public demo at the Consumer Electronics Show earlier this year. TI demonstrated deep learning-based semantic segmentation running on its neural network implementation on TDA SoCs.

The idea, according to Williams, is to task the TDA SoC’s EVE cores with running complex neural network algorithms, while dedicating the C66x DSP cores to traditional machine-vision algorithms.
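Conceptually, the split Williams describes looks something like the short sketch below. The stage names and the “EVE”/“C66x DSP” labels are illustrative stand-ins, not TI’s actual runtime interface.

```python
# Illustrative partitioning of a vision pipeline across heterogeneous cores:
# deep-learning stages go to the EVE queue, classical vision to the DSP queue.
# Stage names and core labels are hypothetical, for illustration only.

PIPELINE = [
    {"stage": "semantic_segmentation", "kind": "cnn"},        # neural network workload
    {"stage": "lane_detection",        "kind": "classical"},  # traditional vision
    {"stage": "object_tracking",       "kind": "classical"},
]

def partition(pipeline):
    """Split pipeline stages between the two core types."""
    return {
        "EVE":      [s["stage"] for s in pipeline if s["kind"] == "cnn"],
        "C66x DSP": [s["stage"] for s in pipeline if s["kind"] == "classical"],
    }

print(partition(PIPELINE))
# {'EVE': ['semantic_segmentation'], 'C66x DSP': ['lane_detection', 'object_tracking']}
```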

Figure 1: TDAx Deep Learning Translation & Partitioning Flow (Source: TI)

TI's AI strategy

Asked about TI’s AI strategy, Magney told us, “We first learned of TI’s activities at CES and were impressed with their goal of supporting AI modules on low-powered TDA2 devices.”

He called this move “pretty shrewd.” It gives developers the chance to build out their algorithms using popular AI frameworks.

The TDA2x SoC, currently on the market, integrates two ARM Cortex-A15 cores, four ARM Cortex-M4 cores, two C66x DSPs and four EVEs. Its applications include front camera, surround view/record and fusion systems.

If indeed TI is upgrading its SoC solution to include deep learning as shown in the demo at CES, Magney sees TI’s secret sauce as “their network translator which optimises the inference model for the target processor and TI’s deep learning libraries.”

DSP & EVE split

Williams told EE Times that TI is not depending on “a bigger hammer like GPU” to run neural network algorithms. TI’s EVE (Embedded Vision Engine), he said, turns out to be very effective for deep learning.

Magney explained that TI “uses DSP architectures to handle double precision (64-bit) floating point while the EVE uses single precision (32-bit) floating point so it all depends on how your algorithms are written.” Magney added, “EVE happens to be well suited for running an AI inference model because of its data parallelism and unique memory architecture. It can handle the many layers of the inference model and process it at very low power.”

In comparing the DSP with the EVE, Magney noted, “Using double precision floating-point variables and mathematical functions is slower than working with their single precision 32-bit counterparts, but the precision is greater. The extra bits increase not only the precision but also the range of magnitudes that can be represented.”
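Magney’s point is easy to quantify for the two IEEE 754 formats in question. The snippet below is generic numpy, not TI-specific code; it simply prints the machine epsilon (precision) and the largest representable value (range) for each type.

```python
import numpy as np

# Compare single vs. double precision: a smaller epsilon means finer precision,
# a larger max means a wider range of representable magnitudes.
for dtype in (np.float32, np.float64):
    info = np.finfo(dtype)
    print(f"{dtype.__name__}: eps = {info.eps:.3g}, max = {info.max:.3g}")

# float32: eps = 1.19e-07, max = 3.4e+38
# float64: eps = 2.22e-16, max = 1.8e+308
```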

Naturally, the processor architectures required depend on the problem being solved, Demler said. “For deep-learning CNNs, you need a highly parallel architecture. Whether DSP-centric, GPU-centric, or specialised accelerators, they will all implement highly parallel processing. TI just happens to call their engine EVE. Nvidia has CUDA and DLA, Movidius has Shave, etc. Mobileye’s EyeQ architecture integrates multiple specialised cores, and some of them will be similar to EVE.”

De Ambroggi believes it makes sense for TI to pursue a hybrid model, using the DSP for traditional vision algorithms and the EVE for deep learning, largely due to “redundancy and safety reasons.”

De Ambroggi expressed scepticism that an AI-only solution can earn ASIL certification. In his opinion, “AI is not safe enough” yet. Furthermore, “you need to split the cores because of optimisation,” he added. For the time being, he suspects, “traditional” machine vision algorithms will still “take” the decision in the car for most OEMs, with the possible exception of Tesla.

Figure 2: Deep Learning on TDA: Semantic Segmentation Demo (Source: TI)

TDAx Next

It turns out that TI has a new SoC in the works, dubbed for now “TDAx Next.” The company has not announced it, and Williams wouldn’t discuss specifics. But during the interview, he hinted that the upcoming TDAx Next will enable autonomous systems from Levels 2-5. TI declined to comment on when it will be ready for the market.

Williams reiterated that TI’s strategy is to protect the software investment made by carmakers and Tier Ones, and to allow their software to migrate from Level 2 cars to highly automated cars.

Williams also pointed out that car OEMs and Tier Ones differ on their favoured architecture for autonomous cars. Their preferences are currently all over the map, ranging from an edge-processing model (with most sensor processing on the edge) to complete central sensor fusion to a hybrid approach (pre-processing on the edge and post-processing at the centre). TI hopes to respond by keeping its solutions as flexible as possible, he said.

Training vs. Inference

In deep learning, TI clearly sees its place as offering solutions for inference engines, rather than providing chips for the training side. This approach raises the question of whether there is any inherent advantage in keeping the same platform (Nvidia’s Xavier, Google’s TPU) for both the training and inference sides of deep learning.

Demler called the question “timely.” He recently wrote an article describing Nvidia’s TensorRT-CNN conversion process. “It’s critical for suppliers of inference engines to provide tools for converting trained networks, and some of the CNN-IP vendors have also done this.”

Demler explained, “Technically, if a developer is translating a pre-trained network model such as Googlenet or Resnet, the dataset they use to calibrate the inference engine is more critical than the system upon which the network was originally trained. That being said, however, for developers creating both the training and inference networks, it would probably be easier to work with an integrated set of tools, although it’s not a technical requirement.”
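As a rough illustration of the calibration step Demler describes, the sketch below runs a representative dataset through a trained layer’s activations and derives a symmetric int8 scale factor for the inference engine. It is a generic post-training quantisation outline with assumed names (calibrate_scale, quantize), not TI’s or Nvidia’s actual conversion tool.

```python
import numpy as np

# Generic post-training calibration: observe activation ranges over a
# representative dataset, then derive an int8 scale for the inference engine.
# Illustrative sketch only, not any vendor's actual tool flow.

def calibrate_scale(activation_batches, num_bits=8):
    """Symmetric per-tensor scale from the observed activation range."""
    max_abs = max(np.abs(a).max() for a in activation_batches)
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    return float(max_abs) / qmax

def quantize(x, scale, num_bits=8):
    """Map float activations to signed integers using the calibrated scale."""
    qmax = 2 ** (num_bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)

# Stand-in for activations captured from frames of the target camera setup.
calibration_batches = [np.random.randn(1, 64, 32, 32).astype(np.float32)
                       for _ in range(8)]
scale = calibrate_scale(calibration_batches)
quantized = quantize(calibration_batches[0], scale)
print("scale:", scale, "dtype:", quantized.dtype)
```

Consistent with Demler’s comment, it is the calibration data, not the hardware the network was trained on, that determines these scale factors.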

Meanwhile, Magney maintained: “I don’t think there is any inherent advantage in training on the same architecture that you run the inference model on, particularly if you are using something like OpenVX.” In his opinion, “Soon it may not matter as you will have lots of choices for training architectures that are offered as a service from major cloud companies.”

However, he added, “It does matter if you train on Nvidia but deploy on TI. It is the conversion tool (which is like a compiler) that optimises the inference model for the target processor.”

Figure 3: TI's Deep Learning Framework (Source: TI)

How big is the L2, L3 market?

Many analysts agree that the market for L2, L3 cars is where the money is today. If so, how big is the market, and how long will it last?

Demler said, “Annual passenger vehicle sales are roughly 90 million per year. Current L2 penetration is very low (L3 is zero), approximately 10% of 2016 new car shipments.”

The Linley Group estimates that ADAS-equipped cars will grow to 40% penetration over the next five years, or around 30-40 million vehicles in 2022.
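A quick back-of-the-envelope check shows those figures hang together, assuming annual sales stay near 90 million through 2022:

```python
# Rough consistency check of the figures quoted above; assumes annual
# passenger-vehicle sales remain near 90 million through 2022.
annual_sales = 90_000_000
l2_2016 = 0.10 * annual_sales    # ~9 million L2-equipped vehicles in 2016
adas_2022 = 0.40 * annual_sales  # ~36 million, within the 30-40 million cited
print(f"{l2_2016 / 1e6:.0f}M in 2016 -> {adas_2022 / 1e6:.0f}M in 2022")
```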

Demler, however, said, “It’s important to not lump L2 and L3 together. Those are very different, and it’s still doubtful how many carmakers will launch L3 systems, which require fallback to a human driver.”

The future of L3 remains unclear. De Ambroggi, agreeing with Demler, told EE Times that he too is sceptical that the issue of the driver taking back control can be properly addressed in L3 autonomy. IHS has not publicly released its forecast for L2 and L3 cars, he said. “We are still investigating.”

Magney said, “I think L2/L3 is going to be a hot market for a long time.” Despite high hopes for the emergence of mobility-as-a-service applications, he cautioned, “For the next couple of decades, cars will largely be sold to individual buyers like they have been.”

First published by EE Times U.S.
