== Remembering ==

* '''Responsible AI''' – A framework for developing and deploying AI systems that are fair, accountable, transparent, and respectful of human rights and values.
* '''AI safety''' – The field concerned with ensuring AI systems behave as intended and do not cause harm, especially as they become more capable.
* '''Bias''' – Systematic and unfair discrimination in AI outputs, often reflecting biases present in training data or model design.
* '''Fairness''' – The principle that an AI system should not discriminate against individuals or groups on the basis of protected attributes (race, gender, age, etc.).
* '''Transparency''' – The property of AI systems being understandable, with decisions that can be explained and audited.
* '''Explainability''' – The ability to provide human-understandable reasons for why an AI system made a specific decision.
* '''Accountability''' – The principle that someone is responsible for an AI system's decisions and their consequences.
* '''Privacy''' – Protection of individuals' personal data from unauthorized collection, use, or disclosure by AI systems.
* '''Differential privacy''' – A mathematical framework for adding calibrated noise to data to protect individual privacy while preserving statistical utility.
* '''Adversarial robustness''' – The ability of a model to maintain correct behavior under adversarial inputs designed to fool it.
* '''Misuse''' – Intentional use of AI systems for harmful purposes (misinformation, surveillance, autonomous weapons).
* '''Hallucination''' – AI-generated content that is factually incorrect, posing risks when AI outputs are trusted without verification.
* '''Model card''' – A documentation framework for AI models describing their intended use, performance, limitations, and ethical considerations.
* '''Algorithmic auditing''' – Independent evaluation of AI systems for bias, discrimination, or safety violations.
* '''AI Act''' – The European Union's comprehensive AI regulation framework, classifying AI systems by risk level.
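The fairness entry above can be made concrete with a demographic-parity check: comparing the positive-prediction rates a classifier gives to different groups. This is a minimal sketch, not a standard library API; the function name is illustrative.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups.

    predictions: iterable of 0/1 classifier outputs.
    groups: iterable of group labels (e.g. a protected attribute), same length.
    A gap near 0 suggests the classifier satisfies demographic parity.
    """
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)  # positive rate within group g
    vals = sorted(rates.values())
    return vals[-1] - vals[0]
```

A gap of, say, 0.3 would mean one group receives positive predictions 30 percentage points more often than another, which an algorithmic audit would flag for further investigation.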
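The differential-privacy entry describes adding calibrated noise; the classic instance is the Laplace mechanism for a counting query. The sketch below illustrates the idea under the standard assumption that a count has sensitivity 1 (one person's data changes it by at most 1); the function names are illustrative, not from any particular library.

```python
import math
import random

def laplace_noise(scale):
    # Sample from Laplace(0, scale) by inverting its CDF.
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon):
    """Epsilon-differentially private count of values matching predicate.

    A counting query has sensitivity 1, so adding Laplace(1/epsilon) noise
    yields epsilon-differential privacy: smaller epsilon means more noise
    and stronger privacy, at the cost of statistical utility.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)
```

Individual noisy answers wobble around the true count, but averages over many independent releases remain useful, which is the utility/privacy trade-off the glossary entry refers to.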