Ensuring Safety in Autonomous Systems: The Power of Perception Contracts

Mathematical assurance for safer autonomous systems—revolutionizing self-driving cars and drones.

Driverless cars and autonomous planes are no longer the stuff of science fiction. In San Francisco alone, two taxi companies had collectively logged an astounding 8 million miles of autonomous driving as of August 2023. And more than 850,000 autonomous aerial vehicles, or drones, are registered in the United States, excluding those operated by the military.

However, these technological advances have raised significant safety concerns. In the 10-month period ending in May 2022, for instance, the National Highway Traffic Safety Administration reported nearly 400 crashes involving vehicles using some form of autonomous control. Those accidents resulted in six fatalities and five serious injuries. The conventional approach to the problem, often called “testing by exhaustion,” is to keep testing the systems until you are satisfied they are safe. But exhaustive testing can never be guaranteed to uncover every flaw. “People carry out tests until they’ve exhausted their resources and patience,” said Sayan Mitra, a computer scientist at the University of Illinois, Urbana-Champaign. Testing alone, however, cannot provide absolute assurance of safety.

Enter Mitra and his team, who have developed a groundbreaking method to ensure the safety of lane-tracking capabilities for cars and landing systems for autonomous aircraft. This innovative approach is now being employed to facilitate drone landings on aircraft carriers, with Boeing planning to test it on an experimental aircraft in the near future. Corina Pasareanu, a research scientist at Carnegie Mellon University and NASA’s Ames Research Center, emphasized the significance of their method, stating, “Their method of providing end-to-end safety guarantees is very important.”

The core of their work revolves around guaranteeing the outputs of the machine-learning algorithms that inform autonomous vehicles. Most autonomous vehicles consist of two fundamental components: a perception system and a control system. The perception system reports information such as a car’s position relative to the lane center, or a plane’s heading and angle with respect to the horizon. It does so by feeding raw data from cameras and other sensors through machine-learning algorithms based on neural networks, which in effect reconstruct the environment surrounding the vehicle.

These estimates are then passed to a separate control module, which makes decisions based on the perceptual information. If an obstacle is approaching, for instance, the control module decides whether to brake or steer around it. Luca Carlone, an associate professor at the Massachusetts Institute of Technology, notes that while the control module relies on well-established technology, “it is making decisions based on the perception results, and there’s no guarantee that those results are correct.”
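Carlone’s point is easier to see with the two-component pipeline written out. Below is a minimal sketch in Python; the class names, state variables, and controller gains are all illustrative assumptions, not taken from Mitra’s system or any production vehicle.

```python
from dataclasses import dataclass

@dataclass
class StateEstimate:
    """Illustrative output of a perception module."""
    lane_offset_m: float      # distance from the lane center, in meters
    heading_error_rad: float  # angle relative to the lane, in radians

class PerceptionSystem:
    """Maps raw sensor data to a state estimate."""
    def estimate(self, camera_frame) -> StateEstimate:
        # A real vehicle would run a trained neural network here; the
        # error in this step is exactly what a perception contract bounds.
        raise NotImplementedError

class Controller:
    """Classical feedback control acting on the *perceived* state."""
    def steering_command(self, est: StateEstimate) -> float:
        k_offset, k_heading = 0.5, 1.2  # illustrative gains
        # Proportional correction steering back toward the lane center.
        return -(k_offset * est.lane_offset_m
                 + k_heading * est.heading_error_rad)
```

The controller is the well-understood half of the pipeline; the open question is how far `StateEstimate` can stray from reality, which is where the rest of the method comes in.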

To offer a robust safety guarantee, Mitra’s team focused on enhancing the reliability of the vehicle’s perception system. Their approach begins by assuming that safety can be assured when a flawless representation of the external world is available. They then analyze the extent of error introduced by the perception system when recreating the vehicle’s surroundings.

The linchpin of this strategy is quantifying the uncertainties involved: the error band, or the “known unknowns,” in Mitra’s terms. That quantification comes from what Mitra and his team call a “perception contract.” In software engineering, a contract is a commitment that, for a given input to a computer program, the output will fall within a specified range. Determining that range is not easy: it depends on the accuracy of the vehicle’s sensors and on conditions such as fog, rain, or solar glare. But if the perception error can be kept within the specified range, and if that range is determined with sufficient precision, Mitra’s team has shown that the vehicle’s safety can be guaranteed.
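Read as code, a perception contract is a state-dependent bound on how far the perceived value may deviate from the truth. Here is a minimal sketch; the linear form and the constants are hypothetical, chosen only to illustrate the idea.

```python
def allowed_error_m(true_offset_m: float) -> float:
    """Hypothetical perception contract: the maximum tolerated estimation
    error (in meters) as a function of the true distance from lane center."""
    return 0.1 + 0.05 * abs(true_offset_m)

def within_contract(true_offset_m: float, perceived_offset_m: float) -> bool:
    """Check whether one perception output honors the contract."""
    return abs(perceived_offset_m - true_offset_m) <= allowed_error_m(true_offset_m)
```

A controller verified against this error band is then safe for any perception system that honors the band, regardless of what happens inside the neural network.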

This approach is akin to a familiar scenario encountered by anyone with an imprecise speedometer. If you know that your speedometer is never off by more than 5 miles per hour, you can always ensure you don’t exceed the speed limit by maintaining a speed 5 mph below what is indicated by your potentially inaccurate speedometer. Similarly, a perception contract provides a comparable guarantee of safety for an imperfect system relying on machine learning.
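The same worst-case reasoning, written out as code (the 65 mph limit is just an example):

```python
SPEED_LIMIT_MPH = 65.0
MAX_SPEEDOMETER_ERROR_MPH = 5.0  # the "contract" on the instrument

def safe_indicated_speed() -> float:
    """Highest speedometer reading that keeps the true speed legal even
    when the instrument under-reads by its full worst-case error."""
    return SPEED_LIMIT_MPH - MAX_SPEEDOMETER_ERROR_MPH  # 60.0
```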

As Carlone pointed out, the goal isn’t to attain flawless perception; the aim is a level of accuracy that doesn’t compromise safety. The team’s key contributions, he said, lie in “introducing the entire idea of perception contracts” and in providing the methods to construct them. To accomplish this, they drew on techniques from the branch of computer science known as formal verification, which offers mathematical ways of proving that a system’s behavior satisfies a specified set of requirements.

Even though the inner workings of neural networks remain somewhat enigmatic, Mitra’s team demonstrated that it is possible to prove numerically that the uncertainty of a neural network’s output falls within predetermined bounds. If that condition is met, the system can be deemed safe. Mitra explained that they can then furnish a statistical guarantee as to whether a given neural network will actually adhere to those bounds, and to what extent.
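One generic way to arrive at such a statistical guarantee, sketched below, is to evaluate the network on held-out samples and apply a concentration inequality. This uses a one-sided Hoeffding bound and is a standard statistical device, not necessarily the team’s exact procedure.

```python
import math

def violation_rate_bound(errors, allowed, confidence=0.99):
    """Upper-bound the probability that the network violates its contract,
    from held-out samples, via a one-sided Hoeffding bound.
    errors:  observed perception errors on the test samples
    allowed: the contract's permitted error for each sample"""
    n = len(errors)
    violations = sum(e > a for e, a in zip(errors, allowed))
    empirical_rate = violations / n
    slack = math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2.0 * n))
    return min(1.0, empirical_rate + slack)

# Example: 2 violations in 10,000 samples gives, with 99% confidence,
# a true violation rate below roughly 1.6%.
```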

Putting these safety guarantees to the test, the aerospace company Sierra Nevada is currently trialing them while landing drones on aircraft carriers. That task poses challenges autonomous driving doesn’t, owing to the extra dimension of flight. As Dragos Margineantu, AI chief technologist at Boeing, explained, landing entails two primary objectives: aligning the plane with the runway and ensuring the runway is clear of obstacles. The collaboration with Mitra’s team involves obtaining safety guarantees for both of these functions.

Margineantu noted that simulations utilizing Mitra’s algorithm have shown improvements in the alignment of an airplane before landing. The next phase, slated for later this year, involves applying these systems during the actual landing of a Boeing experimental aircraft. One of the major hurdles in this endeavor, as Margineantu highlighted, is understanding the unknown factors—determining the uncertainty in their estimates—and how these uncertainties impact safety. He underscored that “most errors happen when we do things that we think we know—and it turns out that we don’t.”
