Artificial Intelligence (AI) is now being embraced across a broad range of industries such as retail,
manufacturing, education, construction, law enforcement, finance, and healthcare.
AI is fast becoming integral to our daily lives: from image and facial recognition systems, machine-learning-powered predictive and prescriptive analytics, hyper-personalized systems, conversational applications, autonomous vehicles, and the identification of disease symptoms, the applications are numerous. With such heavy reliance on the capabilities of AI, the need to trust these systems across all aspects of decision-making is becoming critical. The predictions and prescriptions produced by AI-enabled systems have a tremendous impact on how we view and experience life, death, and personal wellness. This is especially true of AI systems used in healthcare, driverless cars, or even drones deployed in warfare.
However, most of us have little visibility into how AI systems make the decisions they do. In the absence of this clarity, it is even more difficult to understand how their results are applied and consumed across various fields. Many of the techniques and algorithms used for machine learning are either virtually opaque or defy easy examination. This is true of most popular algorithms currently in use, particularly deep learning neural network approaches.
Fortunately, there is a branch of AI, called Explainable AI, that aims to make computer systems operate as expected and to generate transparent explanations for the decisions they make. Going forward, we will need to focus more on Explainable AI in order to build trust in the AI systems used for decision-making. In this presentation, we will explore various algorithms and techniques that support the comprehensibility and interpretability of these machine learning models.
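To give a flavor of what such techniques look like in practice, below is a minimal sketch of one model-agnostic method, permutation feature importance, using scikit-learn. The dataset and model here are illustrative placeholders chosen for the sketch, not examples drawn from the presentation itself.

```python
# A minimal sketch of one model-agnostic interpretability technique:
# permutation feature importance. The dataset and model below are
# illustrative placeholders, not examples from the presentation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# a large drop indicates the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

Methods of this kind treat the model as a black box and probe it through its inputs and outputs, which is why they apply equally to otherwise opaque models such as deep neural networks.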