This is a channel where we will post interesting things about explainable AI.
Explainable Artificial Intelligence (XAI) methods enable us to interpret the decisions made by machine learning models in data science projects. XAI provides us with: 1) global interpretability, i.e. which features of a machine learning model are most important for its predictions overall, and 2) local interpretability, i.e. which feature values of a particular instance most influenced its outcome.
We need XAI because AI is becoming pervasive in our lives. Did you know that when you apply for credit at your local bank, it is most likely an AI program that decides whether to approve or reject your loan? The human at the bank then just relays this decision to you. This is not very transparent: the inner workings of most AI models are opaque, so you do not know why the loan was rejected or approved. This is where XAI comes in, because it makes AI decisions explainable in human terms, e.g. by saying your loan was rejected because your disposable income was not high enough.
It is not, however, only we as users of AI who are driving the demand for AI to be explainable, especially deep learning, where models are often hard to interpret. A major driver is also regulation, and there is plenty of it already. One example is the European Union's GDPR, which in a way codifies a right to explanation when AI has been behind a decision; you can read more about it here: https://en.wikipedia.org/wiki/Right_to_explanation. An important US counterpart is the Fair Credit Reporting Act (FCRA), a federal law that sets the ground rules for how consumer reporting agencies must handle your personal data. It also applies to businesses that use consumer reports.
The FCRA was enacted to protect you from inaccurate information: it gives you the right to access your credit report and to dispute and correct errors in it. It also limits who can view your report and under what circumstances.
You may not know that there are three major consumer credit reporting companies in the United States: Equifax, Experian, and TransUnion. These companies compile information on where you live and work, how you pay your bills, and other financial transactions into a credit report that lenders use to help decide whether, and on what terms, they will loan money to you.
What are "credit scores"? A credit score is calculated from the information in your credit report at a given point in time. The three major credit reporting agencies each have their own criteria for calculating these scores, but generally speaking they fall within a range of 300 to 850 points, with higher scores indicating a lower risk of defaulting on loans or other debts.
Another important regulation is the Equal Credit Opportunity Act (ECOA), a federal law that prohibits creditors from discriminating against applicants on the basis of race, color, religion, national origin, sex, marital status, age, or because they receive public assistance.
The ECOA requires creditors to notify applicants of the action taken on their applications and to provide them with certain information regarding their rights under the law. The ECOA covers all types of credit transactions: installment loans, revolving lines of credit, and open-end accounts (credit cards).
Under the ECOA, creditors must also notify you of an adverse action within 30 days of taking it. The creditor must inform you as to why your application was rejected, and if a credit report was used in making the decision, you are entitled to a copy of that report.
Now that we have talked about regulations, let us turn our attention to the frameworks that allow us to make AI explainable. The first important one is the SHAP library. SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explaining the output of any machine learning model. It is based on the idea of treating a prediction as a cooperative game: the input features are the players, and the prediction is the payout to be divided fairly among them.
In cooperative game theory, the Shapley value gives each player a fair share of the total payout based on the player's average marginal contribution across all possible coalitions. In SHAP, each feature of an instance receives a Shapley value that quantifies how much it pushed the prediction above or below a baseline; these values can be positive or negative, and they sum to the difference between the model's prediction and its expected output.
The goal of SHAP is to produce such an attribution for every prediction a machine learning model makes, using efficient approximations of the Shapley values for common model classes. This allows us to understand why our model has made predictions that we were not expecting!
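To make the game-theoretic idea concrete, here is a minimal sketch that computes exact Shapley values for a toy model by brute-force enumeration of feature coalitions. The model, baseline, and instance below are invented for illustration; the real SHAP library uses much faster model-specific approximations:

```python
# Exact Shapley values for a toy model, by enumerating all feature
# coalitions. Model, baseline, and instance are made up for the example.
from itertools import combinations
from math import factorial

def model(x):
    # a toy "black box": a linear model over three features
    return 2 * x[0] - 1 * x[1] + 0.5 * x[2]

baseline = [0.0, 0.0, 0.0]   # reference input standing in for "missing" features
instance = [1.0, 2.0, 4.0]   # the instance we want to explain
n = len(instance)

def value(coalition):
    # model output when only features in `coalition` take the instance's
    # values and the rest stay at the baseline
    x = [instance[i] if i in coalition else baseline[i] for i in range(n)]
    return model(x)

def shapley(i):
    # weighted average of feature i's marginal contribution over all
    # coalitions of the remaining features
    others = [j for j in range(n) if j != i]
    total = 0.0
    for size in range(n):
        for S in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            total += weight * (value(set(S) | {i}) - value(set(S)))
    return total

phi = [shapley(i) for i in range(n)]
print(phi)       # per-feature attributions
print(sum(phi))  # ≈ model(instance) - model(baseline), the additivity property
```

For a linear model, each feature's Shapley value reduces to its coefficient times its deviation from the baseline, which makes the output easy to verify by hand.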
The next excellent explainability library is LIME (Local Interpretable Model-Agnostic Explanations), an algorithm that explains individual predictions of any classifier or regressor. It has been used for a variety of purposes, such as explaining models that predict stock prices, analyze medical data, or classify text.
The main idea behind LIME is to create explanations that are interpretable: if you want to know why your model made a certain prediction, LIME explains that single prediction in terms of the input features.
This may seem like an obvious goal for any predictive model, but LIME achieves it by fitting a simple local linear model that approximates the black-box model in the neighborhood of the instance being explained, instead of trying to interpret a deep neural network or other complicated model directly. We have used LIME on many projects in the past.
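The local-surrogate idea can be sketched in a few lines. Below, a made-up nonlinear function stands in for the black-box model, and a weighted linear fit around one instance recovers the local explanation; the real LIME library adds sampling schemes for text and images, feature selection, and more:

```python
# A minimal sketch of LIME's core idea: fit a weighted linear surrogate
# to a black-box model around a single instance. The black box here is a
# made-up nonlinear function standing in for a trained model.
import numpy as np

def black_box(X):
    # the model whose predictions we want to explain locally
    return np.sin(X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(0)
x0 = np.array([0.5, 1.0])                  # instance to explain

# 1) sample perturbations in the neighborhood of the instance
Z = x0 + rng.normal(scale=0.1, size=(500, 2))
y = black_box(Z)

# 2) weight samples by proximity to the instance (Gaussian kernel)
d2 = ((Z - x0) ** 2).sum(axis=1)
w = np.exp(-d2 / 0.05)

# 3) weighted least-squares fit of a linear surrogate; its coefficients
#    are the local explanation (roughly the local gradient)
A = np.hstack([np.ones((len(Z), 1)), Z])   # intercept column + features
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

print(coef[1:])  # close to [cos(0.5), 2.0], the true local slopes of black_box at x0
```

The surrogate's coefficients tell us how each feature influences the prediction near this particular instance, which is exactly what "local interpretability" means.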
ELI5 is a Python library that provides local and global interpretation of machine learning models.
Its most commonly used method for global interpretation is permutation feature importance: shuffle one feature at a time and measure how much the model's performance drops. This yields a ranking of the features the model relies on most, which the user can also use for feature selection when building a more accurate model.
For local interpretation, ELI5 can show how much each feature value contributed to a single prediction, e.g. for linear models and decision trees. XAI is useful for many AI models, e.g. domain categorization.
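The permutation-importance procedure is easy to sketch by hand; the snippet below mirrors the idea behind eli5's PermutationImportance wrapper, on a synthetic dataset where the third feature deliberately carries no signal (dataset and model are invented for illustration):

```python
# Permutation feature importance computed by hand: shuffle one feature at
# a time and measure how much the model's score drops. Synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 3 * X[:, 0] + 0.5 * X[:, 1]          # feature 2 carries no signal

model = LinearRegression().fit(X, y)
base_score = model.score(X, y)           # R^2 before any shuffling

importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    rng.shuffle(Xp[:, j])                # destroy feature j's information
    importances.append(base_score - model.score(Xp, y))

print(importances)  # feature 0 largest, feature 2 near zero
```

Shuffling the strongest feature causes the biggest score drop, while shuffling the noise feature changes essentially nothing, which is how the ranking of feature importances emerges.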
A further use case of XAI is trend prediction: when trying to predict new trending goods, one also wants to determine the main drivers behind the predictions, e.g. signals derived from Google search trends.
Automated product tagging is the future of eCommerce. It's faster, more accurate, and saves you time and money.
Here are the top advantages of automated product tagging:
- Faster: automated product tagging allows you to tag products quickly, without having to hire someone or spend hours doing it manually.
- More accurate: tagging is done by a machine, so you don't have to worry about human error when tagging your products.
- Saves money: you no longer need to hire employees or pay freelancers to do this tedious task.
A recent case where we investigated the applicability of explainable AI was URL categorization. URLs can be categorized for many different purposes. Say a company would like to prevent its employees from accessing shopping websites, social media, and other non-work-related URLs; URL categorization can help with this task, which is known as content filtering. This differs from using URL categorization for, e.g., cybersecurity, where the purpose is to identify URLs that are malicious or otherwise problematic, such as phishing websites, and restrict users from accessing them.