What Makes a Good Diagnosis: An Algorithm to Detect Biased Training Data


Authors

Schneider, Madeleine
Thomson, Robert

Issue Date

2019

Type

Conference presentations, papers, posters

Keywords

bias, training data, AI, machine learning, classification

Abstract

There have been a number of high-profile cases of artificial intelligence (AI) systems making culturally inappropriate predictions when classifying images of individuals of different races. These predictions were due in part to implicit biases within the training data. In the case of well-being, there are critical situations where AI systems can be of use, including diagnoses, treatment variation, and care decisions. The challenge of implicit bias in these critical situations is that lives are potentially on the line. Current AI approaches are generally black boxes, in that we cannot inspect the features that drove a particular classification or decision. In this paper we look specifically at a combination of silhouette score and alpha diversity to identify the presence of implicit bias within a dataset. Finally, we discuss a test case where this algorithm could improve our understanding of automated diagnosis tools, specifically in diagnosing borderline personality disorder.
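A minimal sketch of how such a screen might be composed, assuming silhouette score is computed over clusters in feature space (here via scikit-learn's KMeans and silhouette_score) and alpha diversity is taken as the Shannon entropy of demographic group proportions; the function names and cutoff values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def shannon_alpha_diversity(group_labels):
    """Shannon entropy of group proportions; values near zero mean
    a single group dominates the dataset."""
    _, counts = np.unique(group_labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def bias_screen(features, group_labels, n_clusters=2,
                silhouette_cutoff=0.5, diversity_cutoff=0.5):
    """Flag a dataset as potentially biased when its feature space
    separates into clean clusters (high silhouette score) while its
    demographic groups are unevenly represented (low alpha diversity).
    Both cutoffs are illustrative placeholders, not values from the paper.
    """
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    sil = silhouette_score(features, clusters)
    div = shannon_alpha_diversity(group_labels)
    return {
        "silhouette": sil,
        "alpha_diversity": div,
        "flagged": sil > silhouette_cutoff and div < diversity_cutoff,
    }
```

In practice the diversity cutoff would need to scale with the number of groups, since Shannon entropy is bounded above by the log of the group count.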

Citation

Schneider, Madeleine, and Robert Thomson. "What Makes a Good Diagnosis: An Algorithm to Detect Biased Training Data." In AAAI Spring Symposium: Interpretable AI for Well-being. 2019.
