Date of Degree

May 2022

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Program

Education

Advisors

Alison Buck
Michael Frye
Lucretia Fraga

Abstract

In the quality inspection industry, Artificial Intelligence (AI) continues to advance, producing safer and faster autonomous systems that can perceive, learn, decide, and act independently. As the researcher observed while interacting with a local energy company over a one-year period, the performance of these AI systems is limited by the machine's current inability to explain its decisions and actions to human users. In energy companies especially, eXplainable AI (XAI) is critical to achieving speed, reliability, and trustworthiness with human inspection workers. Placing humans alongside AI establishes a sense of trust that augments the individual's capabilities in the workplace. Achieving such a human-centered XAI system requires designing and developing more explainable AI models. Incorporating these human-centered XAI systems into the inspection industry represents a significant shift in how visual inspections are conducted. Adding explainability to intelligent AI inspection systems makes the decision-making process more sustainable and trustworthy by fostering a collaborative approach. Currently, a lack of trust between inspection workers and AI creates uncertainty among workers about the use of existing AI models. To address this gap, the purpose of this qualitative research study was to explore and understand the need for human-centered XAI systems to detect anomalies in quality inspection in the energy industry.
