AI Governance and Policy
== Remembering ==

* '''AI governance''' – The collection of rules, standards, processes, and institutions that guide the responsible development and use of AI.
* '''AI regulation''' – Legally binding rules governing AI development and deployment, backed by enforcement mechanisms.
* '''EU AI Act''' – The world's first comprehensive AI regulation, passed in 2024; it categorizes AI systems by risk and imposes requirements accordingly.
* '''Risk-based approach''' – Regulating AI based on the potential harm of its application, with stricter requirements for higher-risk systems.
* '''Prohibited AI practices''' – Uses of AI banned outright under the EU AI Act, such as social scoring, real-time biometric surveillance in public spaces, and subliminal manipulation.
* '''High-risk AI''' – AI in critical sectors (medical devices, employment, credit, law enforcement) subject to strict requirements under the EU AI Act.
* '''Conformity assessment''' – A mandatory evaluation process for high-risk AI systems before market deployment; it may be a self-assessment or a third-party assessment.
* '''AI auditing''' – Systematic evaluation of an AI system's behavior, performance, fairness, and compliance with standards.
* '''Algorithmic accountability''' – The principle that organizations deploying AI must be able to explain and justify automated decisions.
* '''NIST AI Risk Management Framework (AI RMF)''' – A voluntary US framework for managing AI risks through the GOVERN, MAP, MEASURE, and MANAGE functions.
* '''IEEE P7000''' – A family of IEEE standards for addressing ethical concerns in AI system design.
* '''Bias audit''' – A systematic evaluation of an AI system for discriminatory patterns across demographic groups.
* '''Disparate impact''' – When a neutral policy or algorithm disproportionately disadvantages a protected class, even without discriminatory intent (see the sketch after this list).
* '''Red-teaming''' – Adversarial testing in which experts attempt to elicit harmful, unsafe, or unintended behavior from an AI system.
* '''Watermarking (AI)''' – Technical methods for embedding detectable markers in AI-generated content, required by some regulations.
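A common quantitative test used in bias audits and disparate impact analysis is the four-fifths (80%) rule from US EEOC guidance on employment selection: a group's rate of favorable outcomes divided by the highest group's rate should not fall below 0.8. The Python sketch below illustrates that computation; the group labels and decision data are hypothetical and invented for illustration, and this is a minimal teaching example rather than a complete bias audit.

<syntaxhighlight lang="python">
"""Minimal sketch of a four-fifths (80%) rule check for disparate impact.

Hypothetical data and function names; the four-fifths threshold itself
comes from US EEOC guidance on employment selection procedures.
"""

def selection_rates(outcomes):
    """Favorable-outcome rate per demographic group.

    outcomes maps group name -> list of 0/1 decisions
    (1 = favorable, e.g. hired or credit approved).
    """
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def disparate_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate.

    A ratio below 0.8 is the conventional four-fifths-rule flag
    for possible disparate impact.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical hiring decisions grouped by a protected attribute.
    outcomes = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],  # 70% favorable
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 30% favorable
    }
    for group, ratio in disparate_impact_ratios(outcomes).items():
        flag = "FLAG (< 0.8)" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio = {ratio:.2f} -> {flag}")
</syntaxhighlight>

Running the sketch flags group_b (0.30 / 0.70 ≈ 0.43, below the 0.8 threshold). Note that the four-fifths rule is a screening heuristic, not a legal conclusion: real bias audits typically combine it with statistical significance tests and a review of the decision process.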