FAGMA HealthTech Research

A breakthrough in safety-critical machine learning systems could lead to safer implementation in high-risk environments, such as autonomous driving and healthcare.

The breakthrough could lead the way in developing a framework suitable for safe implementation in high-risk environments.

Researchers from leading US universities have devised a method for predicting the failure rates of safety-critical machine learning systems.

The neural bridge sampling method makes it possible to assess the risks of deploying complex systems in safety-critical environments.

Artificial intelligence (AI) is coming at us fast. It's being used in the apps and services we plug into daily without us really noticing, whether it's a personalized ad on Facebook, or Google recommending how you sign off your email. If these applications fail, the result is, at worst, some irritation to the user. But we are increasingly entrusting AI and machine learning to safety-critical applications, where system failure results in a lot more than a slight UX issue.

One of the most significant examples of this is in autonomous vehicles, where the safety of systems is paramount to the technology’s adoption and acceptance in society.

In 2018, Elaine Herzberg was hit and killed by an Uber self-driving test vehicle when the backup driver (who has this week been charged) failed to intervene. A Tesla vehicle's fatal crash in California on March 23 of this year cast a further cloud over the safety and readiness of self-driving technology.

Safety-critical systems are occupying wider roles in everything from robotic surgery and pacemakers to autonomous flight systems. Any kind of failure in these cases could lead to injury, death or, perhaps at best, serious financial or reputational damage.

According to researchers from MIT, Stanford University, and the University of Pennsylvania, this could all be about to change.

A breakthrough
In a recent paper titled Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems, published on arXiv, the researchers describe a method that draws on decades-old statistical techniques and builds upon a simulation-based testing framework for evaluating black-box systems.
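The core difficulty those statistical techniques address is that failures of a well-engineered system are rare: naive simulation needs on the order of 1/p runs to observe an event of probability p. The paper's method combines Markov chain Monte Carlo with learned (neural) warping; the toy sketch below is not the authors' algorithm, only an illustration of the underlying rare-event idea using classical importance sampling on a made-up one-dimensional "system", where the failure threshold, the nominal distribution, and the shifted proposal are all illustrative assumptions.

```python
import math
import random

random.seed(0)

def failure_indicator(x):
    # Hypothetical black-box safety metric: the system "fails"
    # when the scalar outcome x exceeds 3.0 (illustrative threshold).
    return 1.0 if x > 3.0 else 0.0

N = 100_000

# Naive Monte Carlo: sample operating conditions from the nominal
# distribution N(0, 1) and count failures directly. For a rare event,
# most samples are wasted far from the failure region.
naive = sum(failure_indicator(random.gauss(0.0, 1.0)) for _ in range(N)) / N

# Importance sampling: draw from a proposal N(3, 1) centred on the
# failure region, then reweight each sample by the likelihood ratio
# phi(y; 0, 1) / phi(y; 3, 1) = exp(4.5 - 3*y) so the estimate is
# still unbiased for the nominal failure probability.
is_est = 0.0
for _ in range(N):
    y = random.gauss(3.0, 1.0)
    is_est += failure_indicator(y) * math.exp(4.5 - 3.0 * y)
is_est /= N

# Ground truth for this toy problem: P(X > 3) for X ~ N(0, 1).
true_p = 0.5 * math.erfc(3.0 / math.sqrt(2.0))
print(f"true={true_p:.2e}  naive={naive:.2e}  importance={is_est:.2e}")
```

With the same budget of simulations, the reweighted estimator concentrates samples where failures actually occur, giving a far lower-variance estimate than naive counting; methods like the paper's extend this idea to high-dimensional, black-box simulators where a good proposal cannot be written down by hand.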

[…]

Read more: www.techhq.com
