AI Risk Board
AI Risk Register

A live, collaborative effort to document and track known and potential risks associated with Artificial Intelligence (AI) by and for intelligence analysts.
We’re adding more interactivity and content soon. Stay tuned!
| # | Description | Category | Threat Level | Time Horizon | Potential Mitigation Strategies | Low Prob. Est. | High Prob. Est. |
|---|-------------|----------|--------------|--------------|--------------------------------|----------------|-----------------|
| 1 | AI systems inheriting biases from training data, leading to unfair outcomes. | Bias & Discrimination | Medium | Now | Use diverse and representative training data, apply fairness metrics, and involve human oversight in decision-making. | 0.7 | 1.0 |
| 2 | AI-generated false information or propaganda used to deceive or manipulate. | Misinformation & Disinformation | High | Now | Develop AI-driven detection and verification tools, promote digital literacy, and establish content regulation policies. | 1.0 | 1.0 |
| 3 | AI-generated fake images, videos, or audio, making it hard to discern authenticity. | Deepfakes & Synthetic Media | High | Now | Develop advanced detection methods, establish standards for content disclosure, and promote public awareness of deepfakes. | 1.0 | 1.0 |
| 4 | Ethical, legal, and security concerns related to AI-driven weapons and surveillance. | Autonomous Weapons & Surveillance | High | Near Future (1–3 years) | Establish international regulations and norms, ensure human-in-the-loop oversight, and promote transparency and accountability. | 0.7 | 1.0 |
| 5 | AI systems deceived or compromised by manipulated input data. | Adversarial Attacks | Medium | Now | Develop robust AI models resistant to adversarial attacks, incorporate data integrity checks, and establish monitoring systems. | 0.5 | 0.9 |
| 6 | Overreliance on AI, leading to a lack of critical thinking and human judgment. | Dependence on AI | Medium | Now | Ensure balanced human-AI collaboration, emphasize the importance of human input, and encourage the development of critical thinking skills in analysts. | 0.3 | 0.7 |
| 7 | AI technologies used by state or non-state actors for harmful purposes. | Malicious Use of AI | High | Now | Implement strict export controls, promote international cooperation and norms, and develop countermeasures to detect and mitigate malicious AI use. | 0.7 | 1.0 |
| 8 | Emergent behaviors in AI systems that are difficult to predict or control. | Unintended Consequences | Medium | Near Future (1–3 years) | Invest in safety research, develop interpretable and explainable AI models, and establish guidelines and best practices for AI development. | 0.1 | 0.3 |
| 9 | AI-related ethical and legal questions, such as accountability and human rights. | Ethical & Legal Concerns | Medium | Now | Establish ethical guidelines and legal frameworks for AI development and use, promote transparency and accountability, and involve ethicists and regulators in AI governance. | 0.5 | 0.9 |
| 10 | Unauthorized access, misuse, or breaches of sensitive data used in AI systems. | Data Privacy & Security | High | Now | Implement strict data access controls, encryption, anonymization, data retention policies, and regular security audits. | 0.7 | 1.0 |