Towards the Explainability in AI: Past, Present, and Future
Explainable AI is not a new topic. The earliest work on explainable AI can be found in literature published 60 years ago, where expert systems explained their results via the applied rules or by backtracking through their reasoning. Since the beginning of AI research, scientists have argued that intelligent systems should explain their results, especially when it comes to decisions. In this talk, I start from knowledge representation formalisms in AI and traditional machine learning approaches, trace the latest progress in the context of modern deep learning, and then describe the major research areas and state-of-the-art approaches of recent years. Three main topics will be briefly covered: (1) interpretability and explainability in pure (logic) reasoning, (2) interpretability and explainability in pure (machine) learning, and (3) interpretability and explainability in hybrid AI that synergises pure reasoning with pure learning. The talk ends with a discussion of challenges and future directions.