XAI: explainable AI

1 February 2021
Meirav Barsadeh

Explainable Artificial Intelligence (XAI) refers to techniques and methods for implementing AI systems whose workings can be explained to human beings. While early AI systems were easy to interpret, recent years have seen the rise of a new type of system: algorithms that can detect patterns, offer new knowledge and make suitable recommendations, yet cannot explain the reasoning behind those recommendations.

The more we involve AI in our day-to-day lives, the more we must be able to rely on the decisions these autonomous systems make or recommend. A central question that illuminated the need for explainable systems was: can an AI-based system make its own decision-making process easier to understand, perceive and examine?


Explainable systems are systems whose outputs can be understood without additional human processing: a set of tools and techniques meant to help us humans understand and interpret the processes and predictions made by machines. These systems explain each stage of the process: what has been done so far, what is being done now, and what will be done next. The point is to give human meaning to the model's 'behavior' and expose the data on which its decisions are based.
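The simplest illustration of such a tool is explaining a linear model's prediction by reporting each feature's contribution (its weight times its value) to the final score. The model, feature names and weights below are hypothetical, a minimal sketch rather than the output of any particular XAI library:

```python
# Minimal sketch of a per-prediction explanation for a linear model.
# All feature names and weights are hypothetical illustrations.

def predict_with_explanation(weights, bias, features):
    """Return the model's score plus each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical credit-scoring model.
weights = {"income": 0.4, "debt": -0.7, "years_employed": 0.2}
bias = 1.0
applicant = {"income": 5.0, "debt": 2.0, "years_employed": 3.0}

score, why = predict_with_explanation(weights, bias, applicant)
print(f"score = {score:.2f}")
# List contributions from most to least influential.
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

A reviewer reading this output can see not only the score but which inputs drove it and in what direction; more sophisticated techniques extend the same idea to models whose decision process is not directly readable.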


Why is this important?


Understanding how AI systems reach their conclusions is critical, especially when the decisions derived from these systems affect people's lives, as is the case in medical or legal matters. In fields where trust is critical for cooperation, the more logical a decision-making process seems, the more users will tend to accept it, adopt it and act accordingly.

Another aspect of the importance of explainable systems is responsibility and control. At the end of the day, the responsibility for decisions lies beyond the system and the technology. It is not enough to implement the system's recommendations: decision makers should be able to explain the motives for their actions and the factors that led to them.


A third and last factor is our ability to detect incorrect results, possibly derived from biased data, and trace the path the AI took to reach them. When a result is incorrect, it is worthwhile to fix the underlying problem so that such errors do not repeat themselves.


In conclusion, the field of AI is growing rapidly, and in five years it will surely look much different than it does today. Explainable AI will become an increasingly common tool, and we can patiently observe how these systems affect our daily lives.