Toward Safe Decision-Making via Uncertainty Quantification in Machine Learning

Authors

Cobb, Adam D.
Jalaian, Brian A.
Bastian, Nathaniel D.
Russell, Stephen

Issue Date

2021-11-02

Type

book-chapter

Language

en_US

Abstract

The automation of safety-critical systems is becoming increasingly prevalent as machine learning approaches grow more sophisticated and capable. However, approaches that are safe to use in critical systems must account for uncertainty, whereas most real-world applications currently rely on deterministic machine learning techniques that cannot incorporate it. Before such systems can be deployed in critical infrastructure, we must be able to understand and interpret how machines make decisions, both so that they can support human decision-making and so that they can potentially operate autonomously. We therefore highlight the importance of incorporating uncertainty into the decision-making process and present the advantages of Bayesian decision theory. We showcase an example of classifying vehicles from their acoustic recordings, where certain classes have significantly higher threat levels. We show how carefully adopting the Bayesian paradigm not only leads to safer decisions, but also provides a clear distinction between the roles of the machine learning expert and the domain expert.
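The Bayesian decision-theoretic setup described in the abstract can be sketched as follows: the machine learning expert supplies posterior class probabilities, the domain expert supplies a loss matrix encoding threat levels, and the chosen action minimises posterior expected loss. The class names, loss values, and posterior below are illustrative assumptions, not figures from the chapter.

```python
import numpy as np

# Hypothetical three-class vehicle example. The domain expert specifies
# loss[a, c] = cost of taking action a when the true class is c; mistaking
# the high-threat class ("tank") for a benign one is heavily penalised.
classes = ["car", "truck", "tank"]  # assumed labels; "tank" is high-threat
loss = np.array([
    [0.0, 1.0, 100.0],  # act as if "car"
    [1.0, 0.0, 100.0],  # act as if "truck"
    [5.0, 5.0,   0.0],  # act as if "tank"
])

def bayes_action(posterior, loss):
    """Return the action index minimising posterior expected loss."""
    expected_loss = loss @ posterior  # E[loss | action a] for each a
    return int(np.argmin(expected_loss)), expected_loss

# Posterior class probabilities, e.g. averaged over samples from a
# Bayesian classifier. Even a small probability of the high-threat class
# can dominate the decision once the loss matrix is applied.
posterior = np.array([0.70, 0.25, 0.05])
action, el = bayes_action(posterior, loss)
print(classes[action])  # prints "tank", although "car" is most probable
```

This separation is the point made in the abstract: the loss matrix belongs to the domain expert and can be changed without retraining, while the posterior belongs to the machine learning model.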

Citation

Cobb, A.D., Jalaian, B., Bastian, N.D., Russell, S. (2021). Toward Safe Decision-Making via Uncertainty Quantification in Machine Learning. In: Lawless, W.F., Mittu, R., Sofge, D.A., Shortell, T., McDermott, T.A. (eds) Systems Engineering and Artificial Intelligence. Springer, Cham. https://doi.org/10.1007/978-3-030-77283-3_19

Publisher

Springer
