Time-tested car safety approaches

Article by: Junko Yoshida

The DoT document doesn't address such problems as what happens if an autonomous vehicle fails to activate its fall-back strategy, according to an expert.

« Previously: Independence of automotive safety assessment
 

Philip Koopman, a professor at Carnegie Mellon University, is adamant that the Federal Automated Vehicle Policy should be written without the loopholes that corporations could use to evade re-assessment of safety-critical functions.

More specifically, he said the policy should promote the use of “time-tested safety approaches” such as:

  • Every change to safety-critical code, no matter how minor, should trigger a new safety evaluation.
  • Modularise safety arguments by exploiting strong isolation mechanisms between system modules.
  • Partition each system so that safety can be judged on a modular basis, and segregate unlikely-to-change functions from rapidly changing functions to reduce the safety assessment burden (see the sketch after this list).
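
To make the partitioning idea concrete, here is a minimal sketch (my own illustration, not Koopman's code): a rarely changing safety monitor and a rapidly changing planner are separated by one narrow data type, so a planner update need not reopen the monitor's safety assessment. All names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MotionCommand:
    """The only data that crosses the partition boundary."""
    speed_mps: float
    steering_rad: float

class SafetyMonitor:
    """Rarely changing, independently assessed module: enforces envelope limits."""
    MAX_SPEED_MPS = 30.0
    MAX_STEERING_RAD = 0.5

    def check(self, cmd: MotionCommand) -> MotionCommand:
        # Clamp any command that would leave the assessed safety envelope.
        speed = min(max(cmd.speed_mps, 0.0), self.MAX_SPEED_MPS)
        steering = min(max(cmd.steering_rad, -self.MAX_STEERING_RAD),
                       self.MAX_STEERING_RAD)
        return MotionCommand(speed, steering)

class Planner:
    """Rapidly changing module: updating it triggers its own re-assessment,
    but not the monitor's, because only MotionCommand crosses the boundary."""
    def plan(self) -> MotionCommand:
        return MotionCommand(speed_mps=42.0, steering_rad=0.1)  # out of envelope

if __name__ == "__main__":
    safe_cmd = SafetyMonitor().check(Planner().plan())
    print(safe_cmd)  # speed clamped to 30.0, steering left at 0.1
```
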

Fall-back trigger

Koopman believes the DoT document is not explicit enough about such problems as what happens when the highly autonomous vehicle system itself has a failure that prevents it from activating its fall-back strategy.

Koopman asked, “If the autonomy system is brain-dead, do you have a backup system?” And, “What if the autonomy system is lying to the backup system?”

For example, he said, “a self-diagnosis bug can cause missed problems. Such a self-reporting diagnosis failure could render the vehicle unsafe.” He added, “Relying on an autonomy ‘heartbeat’ to diagnose failure can suffer from a bug in which the heartbeat keeps running even though other significant features within the autonomy system have failed.”
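
As a rough illustration of that failure mode (not code from Koopman or any real vehicle stack), the sketch below shows a heartbeat that keeps reporting liveness after the planner has silently died, and a watchdog that instead demands evidence of recent useful output:

```python
import time

class Autonomy:
    def __init__(self):
        self.last_plan_time = 0.0
        self.planner_alive = True

    def heartbeat(self) -> bool:
        # Naive heartbeat: reports "I'm running" regardless of whether the
        # planner actually produced a fresh plan.
        return True

    def plan_step(self, now: float):
        if self.planner_alive:
            self.last_plan_time = now

def watchdog_ok(av: Autonomy, now: float, timeout_s: float = 0.5) -> bool:
    # Stronger check: require evidence of recent useful output, not just liveness.
    return av.heartbeat() and (now - av.last_plan_time) < timeout_s

if __name__ == "__main__":
    av = Autonomy()
    av.plan_step(now=time.monotonic())   # planner produces a fresh plan
    av.planner_alive = False             # planner silently dies
    time.sleep(0.6)
    av.plan_step(now=time.monotonic())   # dead planner produces nothing new
    now = time.monotonic()
    print("heartbeat says alive:", av.heartbeat())         # True, masks the failure
    print("watchdog says healthy:", watchdog_ok(av, now))  # False, fall-back should trigger
```
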

Koopman’s bottom line: “Don’t make an assumption. Always build a system on the worst-case scenario.”

Methodology to determine “reasonable”

Koopman pointed out, “The degree to which a crash or other mishap ‘reasonably could be anticipated’ per policy wording is in the eye of the beholder.”

It gets even tougher when such matters go to court because “what could be reasonably anticipated” will be judged by a jury.

The automotive industry needs published numerical targets for the catastrophic failure rates that can reasonably be expected. Koopman said, “Heavy-duty safety standards always publish numbers” [for failure rates]. Engineers need a target so that they know what they are shooting for, he noted.

Can you trust your EDR?

The current policy discussion centres on data recording for capturing and analysing driving situations. Koopman said, “That is surely important,” but the bigger issue is the credibility of the data and the conclusions drawn from interpreted data.

An automotive company might claim that its “autopilot” was not engaged when a crash happened, because that’s how the data was recorded. But if this conclusion is based on output from a process that might itself be defective, “that data cannot be used to exonerate the computational process from having defects, since it is telling you what it ‘thinks’ is happening rather than what is actually happening,” Koopman explained.

In Koopman’s opinion, the federal requirement “should be expanded” so that “a safety assessment should analyse the credibility of various data that has been recorded.”

Indeed, Egil Juliussen, director of research for Infotainment & ADAS at IHS Automotive, discussed in a recent interview with EE Times the importance of allowing investigators to gain access to raw data. He said, “While the DoT document doesn’t specify which data to record, I think camera information should be recorded.”

Koopman said that such video footage might not lead to definitive conclusions, but raw video helps investigators understand the situation the car was in.
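
One hedged sketch of what recording both the raw evidence and the system's own interpretation could look like; the file name, field names, and JSON-lines format below are my assumptions, not anything specified by the policy:

```python
import hashlib
import json
import time

def record_edr_frame(path: str, raw_camera_bytes: bytes, system_state: dict):
    frame = {
        "t": time.time(),
        # Raw evidence (a hash and length stand in for the actual frame here),
        # captured before the autonomy software interprets it.
        "camera_sha256": hashlib.sha256(raw_camera_bytes).hexdigest(),
        "camera_len": len(raw_camera_bytes),
        # What the software *thinks* is happening; useful, but not self-validating.
        "system_state": system_state,
    }
    with open(path, "a") as f:
        f.write(json.dumps(frame) + "\n")

if __name__ == "__main__":
    record_edr_frame(
        "edr_log.jsonl",
        raw_camera_bytes=b"\x00" * 1024,
        system_state={"autopilot_engaged": False, "speed_mps": 27.5},
    )
```

With the raw frames on record, an investigator can check whether the claim “autopilot was not engaged” matches evidence the autonomy software did not itself compute.
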

Components’ end of life

Beyond crashes, “components can also be expected to fail in normal operation,” said Koopman. He explained, “an embedded system [or the chips inside] could fail because it is getting too old; it has its own end of life.”

The DoT’s policy statement should make more explicit that diagnostics should “continually provide extremely high coverage of all safety‐relevant hardware and software component faults over the operational life of the vehicle,” Koopman said.
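
A toy sketch of what continuous, high-coverage diagnostics might look like in practice; the component names and tests here are placeholders I made up, not anything prescribed by Koopman or the DoT:

```python
from typing import Callable, Dict, List

def ram_march_test() -> bool:               # placeholder for an actual RAM march test
    return True

def flash_crc_check() -> bool:              # placeholder for a stored-code CRC check
    return True

def brake_actuator_plausibility() -> bool:  # placeholder wear/aging check
    return False                            # simulate an end-of-life component

SELF_TESTS: Dict[str, Callable[[], bool]] = {
    "ram": ram_march_test,
    "flash": flash_crc_check,
    "brake_actuator": brake_actuator_plausibility,
}

def run_diagnostics() -> List[str]:
    """Return the names of failed components; an empty list means a full pass."""
    return [name for name, test in SELF_TESTS.items() if not test()]

if __name__ == "__main__":
    failed = run_diagnostics()
    if failed:
        print("Diagnostic failure, activate fall-back strategy:", failed)
    else:
        print("All safety-relevant components passed")
```
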

Pittsburgh left

This is where computer science meets social science. Presumably, every autonomous-car R&D team is trying to identify the anomalies in traffic laws and driving customs that are specific to each region. The goal is to fold such information into each autonomous driving algorithm.

Take the example of the Pittsburgh left, Koopman said.

[Pittsburgh Left]
__Figure 1:__ *Pittsburgh Left*

The Pittsburgh left is a driving practice in which the first left-turning vehicle takes precedence over vehicles going straight through an intersection. This is specific to the Pittsburgh area. Although illegal and controversial, it’s what drivers commonly do in Pittsburgh.

Koopman said, “Dealing with exceptional situations in traffic will be crucial to the ultimate success of Highly Autonomous Vehicles,” as is recognised by the DoT’s proposed policy. But he believes the DoT should go further and recommend that the industry collaborate on creating a more thoroughly specified set of traffic rules that take into account the foreseeable exceptions to rules that many drivers encounter every day.

NHTSA now has a perfect opportunity to fund the development of a taxonomy of special situations, to establish a baseline set of requirements, he explained.
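
As an illustration of what such a taxonomy could look like as machine-readable data, here is a hypothetical schema with a Pittsburgh-left entry; this is my own example, not an NHTSA or industry artifact:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RuleException:
    rule_id: str       # baseline traffic rule being modified
    region: str        # where the local custom applies
    description: str
    legal: bool        # a custom may be common yet still illegal

SPECIAL_SITUATIONS: List[RuleException] = [
    RuleException(
        rule_id="unprotected_left_turn_yields_to_oncoming",
        region="Pittsburgh, PA",
        description="The first left-turning vehicle commonly goes before oncoming "
                    "traffic when the light turns green (the 'Pittsburgh left').",
        legal=False,
    ),
]

def exceptions_for(region: str) -> List[RuleException]:
    """Return the locally expected deviations a planner should anticipate."""
    return [e for e in SPECIAL_SITUATIONS if e.region == region]

if __name__ == "__main__":
    for e in exceptions_for("Pittsburgh, PA"):
        print(e.rule_id, "->", e.description)
```
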

Driver takeover strategy

As is recognised by the policy and its adoption of SAE Levels 0 to 5, autonomy is not a black-or-white proposition, said Koopman. If a safety-critical aspect of vehicle operation is fully controlled by a computer (e.g., throttle-by-wire, brake-by-wire), then the argument that a human can regain control assumes that the software will actually cede control back to the human.

He said, “A purely electro‐mechanical takeover mechanism is a possibility, but is often missing, inappropriate, or might be subverted by software defects.”
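
The sketch below (purely illustrative, not from Koopman) shows why software-only arbitration is fragile: if the same software that must cede control is the thing that is defective, the driver's takeover request can simply be ignored, whereas an electro-mechanical interlock would bypass this function altogether.

```python
def arbitrate(driver_requests_takeover: bool,
              software_willing_to_cede: bool) -> str:
    """Decide who controls the vehicle this cycle, purely in software."""
    if driver_requests_takeover and software_willing_to_cede:
        return "driver"
    return "autonomy"

if __name__ == "__main__":
    # A defect (or misjudged internal state) in the autonomy stack keeps
    # software_willing_to_cede False, so the driver never regains control.
    print(arbitrate(driver_requests_takeover=True,
                    software_willing_to_cede=False))  # -> "autonomy"
```
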

[FAP table]
__Figure 2:__ *F3 in the table above should be changed to “Partially,” Koopman proposes. (Source: Federal Automated Vehicle Policy)*

Koopman argues that the driver must be able to take the wheel at any autonomy level. He believes the policy should spell out that the capability for a system to fall back to manual driver control should be the subject of safety assessment, even for SAE Level 2 vehicles.

 
« Previously: Independence of automotive safety assessment
