Explainable artificial intelligence

Definition
Explainable AI (XAI), also known as transparent AI, is artificial intelligence (AI) whose actions can be easily understood by humans. It contrasts with "black box" AI that employs complex, opaque algorithms, where even the system's designers cannot explain why it arrived at a specific decision. XAI can be used to implement a social right to explanation. Transparency rarely comes for free: there are often tradeoffs between how "smart" an AI is and how transparent it is, and these tradeoffs are expected to grow larger as AI systems increase in internal complexity. The technical challenge of explaining AI decisions is sometimes known as the interpretability problem.
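The contrast between transparent and black-box models can be made concrete with a minimal sketch, assuming scikit-learn is available; the decision tree and neural network below are illustrative stand-ins, not a canonical XAI method. A shallow decision tree's rules can be printed and audited directly, while a neural network's weights do not, by themselves, explain any individual prediction.

```python
# Minimal sketch contrasting an interpretable ("transparent") model with a
# black-box one. Assumes scikit-learn; models and dataset are illustrative.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data, iris.target

# Transparent model: a shallow decision tree whose if/then decision rules
# can be read directly by a human.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=iris.feature_names))

# Black-box model: a neural network whose internal weights do not, by
# themselves, explain why a particular input was classified a given way.
mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000,
                    random_state=0).fit(X, y)

print("tree accuracy:", tree.score(X, y))
print("mlp accuracy:", mlp.score(X, y))
```

Running the sketch prints the tree's complete decision logic as nested threshold rules, while the neural network offers only its output; this gap in inspectability is one simple instance of the interpretability problem described above.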