Are We Too Hard On Artificial Intelligence For Autonomous Driving?
I recently attended and presented at Detroit's "Implementation of ISO 26262 & SOTIF" conference. Its subtitle was "Taking an Integrated Approach to Automotive Safety." After three days, my head was spinning with the sheer number of ISO, SAE, and other standards. And at the end of day two, after yet another example that tricked autonomous driving prototypes into behaving wrongly, I sighed and asked whether anybody else felt bad for these AIs. It feels like we ask AI to do far more than any human being ever could. My question drew some chuckles but also sparked an honest discussion about how to quantify autonomous driving capabilities.
Automotive IQ, a division of IQPC, organized this conference and attracted a crowd of about 70. It was an intimate setting and offered great networking with a mix of research and industry participants.
Day one was a focus day, during which we discussed the official publication of ISO 21448, "Safety of the intended functionality." It sits somewhat above ISO 26262 and guides the practical design, verification, and validation measures, as well as the activities during the operation phase, needed to achieve and maintain SOTIF. In discussions with other tool vendors that deal further up the design chain with autonomous driving scenarios, I heard that customers consider SOTIF important and that some vendors in the scenario modeling space have offerings that can help.
Day two started with a panel discussion moderated by GM’s Technical Fellow for System Safety, Rami Debouk, with Mathieu Blazy-Winning, Director of Functional Safety, NXP Semiconductors, and Philip Koopman, Associate Professor, Carnegie Mellon University.
Philip Koopman outlined a standards-based systems engineering approach: essential vehicle safety functions are addressed by FMVSS and NCAP, security by SAE J3061 and ISO/SAE 21434, and equipment faults by ISO 26262 functional safety mechanisms. For environment and edge cases in dynamic driving functions, ISO 21448 and SaFAD/ISO TR 4804 need to be applied, while ANSI/UL 4600 addresses system safety for highly automated vehicle safety cases beyond dynamic driving. And let's not forget road testing safety, covered in SAE J3018.
From an OEM perspective, that’s a lot of standards to cover.
NXP's Mathieu Blazy-Winning outlined how they, as a Tier 2 supplier, work towards complying with four standards to achieve functional safety for automotive and industrial applications:
- IATF 16949 – harmonizing the different assessment and certification systems worldwide in the supply chain for the automotive sector.
- ISO 26262:2018 – using the particular Hazard Analysis and Risk Assessment (HARA) built into the standard.
- IEC 61508:2010 – allowing more flexibility than ISO 26262 for Hazard and Risk Analysis to evaluate hazards, including techniques common in the ISO 12100 standard.
- Automotive SPICE – “Automotive Software Process Improvement and Capability Determination” to assess the performance of the development processes of OEM suppliers in the automotive industry.
On top of compliance with those, Mathieu’s team is also monitoring ISO 21448 SOTIF, ISO TR 9839 “Predictive Maintenance,” UL 4600 “Evaluation of Autonomous Products,” J3131 “Definition of Terms for Autonomous,” IEEE P2851 “Data Format for Interoperability” and the Accellera Functional Safety Working Group for automation, interoperability, and traceability.
Is your head spinning yet?
During the discussion, I asked how vendors assess the ROI of investing in all these standards and whether some are more important than others. Prof. Koopman brought up the hierarchy of concurrent safety needs shown above. Shortcuts are not possible: the safety aspects build on each other. That is why requirements trickle all the way through the design chain to IP vendors, and why they need to be traced.
The safety needs are complex and differ at every level of the design chain. And they build on each other.
On day three, my presentation as a supplier in the design chain focused on the capabilities customers expect in the Network-on-Chip (NoC) domain, where safety features must be built into the interconnect when it is used in automotive applications. I also re-emphasized the vision of scalability as a looming problem in safety analysis that our Fellow and Safety Lead, Stefano Lorenzini, had charted earlier this year. In addition, I addressed the issue of requirements tracing.
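To make the requirements-tracing idea concrete, here is a minimal sketch of the kind of check a traceability flow performs: verifying that every safety requirement flowing down the design chain is linked to at least one implementing design artifact. All requirement IDs and artifact names below are hypothetical illustrations, not drawn from any specific standard, tool, or product.

```python
# Hypothetical safety requirements flowing down from the OEM level
# to an IP supplier (illustrative IDs and descriptions only).
requirements = {
    "SR-001": "ECC protection on NoC data payloads",
    "SR-002": "Duplication of interconnect control logic",
    "SR-003": "Built-in self-test at startup",
}

# Trace links: which design artifacts claim to implement each requirement.
trace_links = {
    "SR-001": ["noc_ecc_encoder", "noc_ecc_decoder"],
    "SR-002": ["ctrl_lockstep_checker"],
    # SR-003 has no implementing artifact yet -- a traceability gap.
}

def untraced(requirements, trace_links):
    """Return requirement IDs with no implementing design artifact."""
    return sorted(r for r in requirements if not trace_links.get(r))

print(untraced(requirements, trace_links))  # -> ['SR-003']
```

Real traceability tools do far more (bidirectional links, verification evidence, change impact), but the core question is the same: does every requirement have a traced, verified implementation at the next level down?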
A lot of the discussion during the conference centered on AI and its related safety aspects. We discussed examples that lead to misbehaving artificial intelligence: sunlight at a particular angle, for instance, can change the perception of a traffic light so that multiple lights appear to be on. Most humans, me included, would likely fail here as well. Per NCAP, Europe had 51 road deaths per million inhabitants in 2019, having reduced road deaths by 6% over the last five years alone. Autonomous technologies intend to steepen that curve. A recent study from IDTech found that poor performance of the system caused only 1% of autonomous vehicle accidents, i.e., 2 out of 83 cases.
So, are we too hard on AI, having to comply with all these standards? Probably. But it is still necessary to do so, given that responsibility and liability for accidents shift away from humans to machines and their creators. And there are still lots of questions to answer.
Exciting times ahead, let’s make sure they are safe!
Frank Schirrmeister is vice president of solutions and business development at Arteris. He leads activities in the automotive, data center, 5G/6G communications, mobile, and aerospace industry verticals and the technology horizontals of artificial intelligence, machine learning, and safety. Before Arteris, Schirrmeister held various senior leadership positions at Cadence Design Systems, Synopsys, and Imperas, focusing on product marketing and management, solutions, strategic ecosystem partner initiatives, and customer engagement.