
Explainable AI: understanding AI is possible!

Banks can use artificial intelligence to determine if they should extend credit to their customers, and for how much. Payment service providers use AI to detect and prevent payment fraud. And insurance companies are using AI to automate claims handling for the simplest cases.

These are just a few examples of how AI is being adopted in financial services. With the stakes high, businesses and governments embracing AI and machine learning are under increasing pressure to lift the veil on how their AI models make decisions.

To better understand how AI models arrive at their decisions, organizations are gradually turning to explainable AI.

What is Explainable AI?

AI models are extremely complex, so much so that it is impossible for a human to follow the exact calculations of an entire model.

Explainable AI, or XAI, is a set of processes and methods that allow people to understand how the algorithm gets from point A (input data, like a person's financial history) to point B (the conclusion, such as whether or not a loan is approved).

By understanding a model's expected impact and potential biases, and by having access to a summary of how the model maps inputs to outputs, users can make sense of the results produced by machine learning algorithms, which in turn can be a basis for greater confidence in AI.

The concept is simple. But in practice, explainable AI is difficult, and in some cases it may not yet be possible. Much depends on the size of the model. Humans can readily understand small systems and algorithms, where we can grasp the links between a handful of data points, but we do not have the computing power of AI models. A complex system, for example, may contain around 28 million lines of code, which would take even a fast reader more than 3.5 years to get through.

Simple systems are not always sufficient for their intended purpose: as the name suggests, they remain simple, while larger and more complex models can provide much deeper analysis and higher performance.

Currently, XAI can be implemented in several ways:

>> The first is to document how the algorithm was built and to fully understand the data it was trained on. The data must be reviewed against the intended use, both to ensure that it is appropriate and to determine whether it is likely to be biased (a minimal documentation sketch follows this list).

>> The second is transparency of the calculations. An algorithm that is very complex and requires in-depth expertise will not be as easy to understand as one designed with explainability in mind.

>> Finally, we must build explainability into a continuous cycle and put in place tools that let developers understand how an algorithm works. By sharing this knowledge with other AI developers, explainability can quickly become easier to implement.
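As a minimal sketch of the first point, the snippet below records how a model was built and what data it was trained on, so that this information can be audited alongside the model itself. All field names and values are hypothetical.

# Minimal sketch of documenting a model's provenance (all fields hypothetical).
# The aim is simply to keep the "how it was built" record next to the model.
import json
from datetime import date

model_card = {
    "model_name": "credit_approval_v1",
    "trained_on": str(date(2022, 1, 15)),
    "training_data": {
        "source": "internal loan applications, 2015-2021",
        "rows": 250_000,
        "features": ["income", "debt_ratio", "payment_history", "age"],
    },
    "intended_use": "pre-screening of consumer loan applications",
    "bias_checks": {
        "protected_attributes_reviewed": ["age", "gender"],
        "method": "disparate impact ratio per group",
    },
}

# Store the card alongside the model artifact so reviewers can audit it later.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)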

Building such large and complex systems also means that XAI is much more compute-intensive than other forms of AI. Powerful computing platforms are needed, especially for continuous-learning models that keep growing in size.

How Does Explainable AI Work?

Although standards for the XAI process are still being defined, a few key questions resonate across the industries implementing it: who is the model being explained to? How precise does the explanation need to be? And what part of the process needs to be explained?

Explainability boils down to this: What exactly are we trying to explain, to whom and why?

Understanding the origin of a model requires asking a few questions: How was the model trained, how was the data used, and how were biases in the calculations measured and mitigated?

These questions are the data science equivalent of explaining which school your financial analyst attended, who their teachers were, what they studied, and what grades they obtained. Getting this right is more about process and paper trails than about pure AI, but it is key to building trust in a model.

Most explanations of the global model fall into one of two camps.

The first is a technique sometimes called “proxy modeling”, in which simpler, easier-to-understand models, such as decision trees, approximately describe the AI model. Surrogate models can also be constructed from the explanations of many individual decisions. Surrogate models give more of a "feel" for the overall model than a precise scientific understanding.
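As an illustration of proxy modeling, the following sketch (using scikit-learn on synthetic data, not any particular production model) fits a shallow decision tree to the predictions of a larger "black-box" model, so that the tree roughly describes the model's behavior.

# Sketch of proxy (surrogate) modeling: a shallow decision tree is trained to
# imitate a more complex model's predictions (scikit-learn, synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5000, n_features=8, random_state=0)

# The complex "black-box" model we want to describe.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate is trained on the black box's predictions, not on the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully does the surrogate mimic the black box?
fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate fidelity: {fidelity:.2%}")

# The tree's rules give a rough, human-readable description of the model.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))

The fidelity score shows how closely the surrogate tracks the black box; the lower it is, the more cautiously the "feel" it gives should be treated.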

The second approach is to “design for interpretability”: the AI model is designed and trained from smaller, simpler pieces, resulting in models that are still powerful but whose behavior is much easier to explain.
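A minimal sketch of designing for interpretability, again with scikit-learn on synthetic data: a logistic regression over a handful of named features keeps each piece of the model (one weight per feature) simple enough to read off directly. The feature names and the toy approval rule are illustrative assumptions.

# Sketch of "design for interpretability": a model whose pieces (one weight per
# feature) can be read directly. Feature names and the toy rule are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "debt_ratio", "late_payments", "account_age"]
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 0] - 2 * X[:, 1] - X[:, 2] > 0).astype(int)  # toy approval rule

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Each coefficient is a small, simple piece of the overall model that can be
# explained on its own: its sign and size show how the feature moves the decision.
for name, coef in zip(feature_names, model.named_steps["logisticregression"].coef_[0]):
    print(f"{name:>14}: {coef:+.2f}")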

Why is XAI best at explaining individual decisions?

Currently, the best-understood area of XAI is individual decisions: why someone was not granted a loan, for example.

Techniques such as LIME or SHAP, used in combination with other XAI methods, offer very literal mathematical answers to such questions, and those answers can be presented to data scientists, managers, regulators and consumers.
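As a sketch of what such a literal answer can look like, the snippet below (assuming the shap library; the gradient boosting model and synthetic data are illustrative) breaks a single customer's prediction down into per-feature contributions.

# Sketch: explaining one individual decision with SHAP (assumes `pip install shap`;
# the model and data are illustrative, not a real credit portfolio).
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one contribution per feature, per customer

customer = 0  # explain the first customer's decision
for i, contribution in enumerate(shap_values[customer]):
    # contributions are expressed in the model's margin (log-odds) space
    print(f"feature x{i}: {contribution:+.3f}")
print("baseline (expected value):", explainer.expected_value)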

The "Explainable Machine Learning in Credit Risk Management" use case uses SHAP values to identify the most important variables for decision making in credit risk management. By analyzing the explanation data for the constituents of a portfolio and grouping them into clusters with very similar values, it is possible to gain a deep understanding of the inner workings of a trained model.

The SHAP method decomposes a prediction into the contributions of individual variables to the probability of the predicted outcome. Each data point (i.e. a credit or loan customer in a portfolio) is then represented not only by its input features, but also by the contributions of those features to the machine learning model's prediction.

This can reveal segmentations of the data points (customers) in which each cluster shares very similar decision criteria, rather than merely similar input variables. These clusters encapsulate the mechanics of the machine learning model and represent how it makes its decisions, meaning users can better understand what the model has learned and verify its decisions.

These clusters can also highlight trends, anomalies, hotspots, emerging effects, and tipping points in the data, all of which can be analyzed and understood.
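A minimal sketch of this clustering idea, continuing from the SHAP matrix computed in the previous sketch: customers are grouped by the per-feature contributions to their predictions rather than by their raw inputs, here with k-means and an arbitrary choice of four clusters.

# Sketch: clustering customers by their SHAP contribution vectors rather than by
# their raw inputs, continuing from the `shap_values` matrix computed above.
import numpy as np
from sklearn.cluster import KMeans

n_clusters = 4  # arbitrary choice for illustration
clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(shap_values)

# The average contribution profile of each cluster is a compact view of the
# decision criteria the model applies to that customer segment.
for c in range(n_clusters):
    profile = shap_values[clusters == c].mean(axis=0)
    print(f"cluster {c} ({np.sum(clusters == c)} customers):", np.round(profile, 3))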

The future of AI is explainable

Industries and governments around the world are already working to put XAI guidance in place. There is no standard yet, and requirements vary depending on the model, the level of risk, the data and the context of what needs to be understood.

While healthy debate remains over how to implement it, XAI can be used to understand model outputs and forms part of a broader AI risk management practice. This should lead to greater confidence in AI and, in turn, to wider adoption, better inclusion and greater accessibility of these technologies, especially in the public sector.
