== Understanding ==

There is a fundamental tension in AI between '''model complexity and interpretability''': the most accurate models (deep neural networks, gradient boosting ensembles) are typically the least interpretable, while the most interpretable models (linear regression, shallow decision trees) are often less accurate. XAI attempts to navigate this tension.

'''Two strategies''':

'''Intrinsically interpretable models''': Choose model architectures that are interpretable by design. Linear models explain predictions as weighted feature sums. Generalized Additive Models (GAMs) extend this to non-linear feature contributions. Decision trees can be visualized. Rule lists produce human-readable decision logic. For high-stakes decisions, these are often preferable even at some accuracy cost.

'''Post-hoc explanation''': Train any model, then explain its predictions afterward. SHAP computes each feature's Shapley value (its average marginal contribution across all possible feature orderings), providing a theoretically principled attribution. LIME fits a local linear model around the prediction to approximate the complex model's behavior in that region.

'''The faithfulness problem''': Post-hoc explanations do not explain the model itself; they explain a simpler approximation of it. An explanation that looks plausible may not accurately reflect the model's actual reasoning. This is known as the faithfulness problem and is a fundamental limitation of post-hoc XAI.

'''Explanation types by audience''': Data scientists need feature attributions and global model behavior. Domain experts need contrastive explanations ("why X rather than Y?"). End users affected by decisions need natural language explanations. Regulators need documentation and audit trails.
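The "weighted feature sums" idea behind intrinsically interpretable linear models can be made concrete with a minimal sketch. The model, feature names, and weights below are entirely hypothetical; the point is that for a linear model the explanation is just the model's own terms:

```python
# Hypothetical linear risk model: the prediction is a weighted sum of
# features plus a bias, so each feature's contribution is directly readable.
weights = {"age": 0.03, "income": -0.5, "prior_defaults": 1.2}
bias = -1.0

def predict_and_explain(features):
    # Each feature's contribution is simply weight * value.
    contributions = {name: weights[name] * features[name] for name in weights}
    score = bias + sum(contributions.values())
    return score, contributions

score, contribs = predict_and_explain(
    {"age": 40, "income": 3.0, "prior_defaults": 1}
)
# contributions: age = 1.2, income = -1.5, prior_defaults = 1.2
# score = -1.0 + 0.9, i.e. approximately -0.1
```

No approximation is involved: the contributions sum exactly (up to floating-point rounding) to the prediction, which is what "interpretable by design" buys you.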
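The Shapley-value definition used by SHAP ("average marginal contribution across all possible feature orderings") can be computed exactly for a toy model by brute-force enumeration. The two-feature model and baseline below are hypothetical; real SHAP implementations approximate this computation, since enumerating orderings is exponential in the number of features:

```python
from itertools import permutations

# Hypothetical "black-box" model with an interaction term.
def model(x0, x1):
    return 2 * x0 + 3 * x1 + x0 * x1

baseline = {"x0": 0, "x1": 0}   # feature "absent" means: use the baseline value
instance = {"x0": 1, "x1": 2}   # the prediction we want to explain
features = list(instance)

def value(subset):
    """Model output with features in `subset` at their instance values,
    all other features held at the baseline."""
    args = {f: (instance[f] if f in subset else baseline[f]) for f in features}
    return model(**args)

# Shapley value: average each feature's marginal contribution
# over every possible ordering in which features are "added".
orders = list(permutations(features))
shapley = {f: 0.0 for f in features}
for order in orders:
    present = set()
    for f in order:
        before = value(present)
        present.add(f)
        shapley[f] += (value(present) - before) / len(orders)
```

A useful sanity check is the efficiency property: the Shapley values sum exactly to `value(all features) - value(baseline)`, which is part of what makes the attribution "theoretically principled".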
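LIME's local-surrogate idea can likewise be sketched without the library: sample perturbations around the instance, weight them by proximity, and fit a weighted linear model. The black-box function, kernel width, and sample count below are illustrative assumptions, not the lime package's actual internals:

```python
import math
import random

# Hypothetical black-box model, nonlinear in its input.
def black_box(x):
    return x * x

x0 = 2.0       # instance to explain
width = 0.5    # kernel width: controls how "local" the explanation is
random.seed(0)

# 1. Sample perturbations around the instance and query the black box.
xs = [x0 + random.gauss(0, 1) for _ in range(500)]
ys = [black_box(x) for x in xs]

# 2. Weight each sample by its proximity to the instance (Gaussian kernel).
ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]

# 3. Fit a weighted linear surrogate y ~ a + b*x (closed-form weighted
#    least squares for a single feature).
sw = sum(ws)
mx = sum(w * x for w, x in zip(ws, xs)) / sw
my = sum(w * y for w, y in zip(ws, ys)) / sw
b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
     / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
# Near x0 = 2 the true local slope of x^2 is 4, so b should land close to 4.
```

The surrogate's slope `b` is the "explanation": it is faithful only near `x0`, which is exactly the faithfulness caveat discussed above; far from the instance the linear surrogate and the black box diverge.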