# AI Risk Management Frameworks and AI Security Resources
## NIST Resources
- [NIST AI Risk Management Framework](https://www.nist.gov/itl/ai-risk-management-framework): intended to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
- [Roadmap for the NIST Artificial Intelligence Risk Management Framework](https://www.nist.gov/itl/ai-risk-management-framework/roadmap-nist-artificial-intelligence-risk-management-framework-ai)
- [NIST “Language of Trustworthy AI: An In-Depth Glossary of Terms”](https://airc.nist.gov/AI_RMF_Knowledge_Base/Glossary)
- [NIST AI Technical and Policy Documents](https://airc.nist.gov/AI_RMF_Knowledge_Base/Technical_And_Policy_Documents)
## Europe
- [European AI Act](https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai)
- [AI Strategy in Europe](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN)
- [European Commission Artificial Intelligence – Questions and Answers](https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683)
## Canada
- [Artificial Intelligence and Data Act](https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act)
- [Canadian Guardrails for Generative AI – Code of Practice](https://ised-isde.canada.ca/site/ised/en/consultation-development-canadian-code-practice-generative-artificial-intelligence-systems/canadian-guardrails-generative-ai-code-practice)
## ISO
- [ISO AI Website](https://www.iso.org/sectors/it-technologies/ai)
- [ISO/IEC 42001:2023 Artificial intelligence – Management system](https://www.iso.org/standard/81230.html)
- [ISO/IEC 23894:2023 Artificial intelligence – Guidance on risk management](https://www.iso.org/standard/77304.html)
- [ISO/IEC 23053:2022 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)](https://www.iso.org/standard/74438.html)
## Cloud Security Alliance
- [CSA's Securing LLM Backed Systems: Essential Authorization Practices](https://github.com/The-Art-of-Hacking/h4cker/blob/master/ai_research/ai_risk_management/Securing%20LLM%20Backed%20Systems%20-%20Essential%20Authorization%20Practices%2020240806.pdf)
## Additional Securing AI Resources
- [NSA/DoD - Joint Guidance on Deploying AI Systems Securely](https://media.defense.gov/2024/Apr/15/2003439257/-1/-1/0/CSI-DEPLOYING-AI-SYSTEMS-SECURELY.PDF)
- [MITRE ATLAS](https://atlas.mitre.org/)
- [OWASP Top 10 for LLM Applications](https://genai.owasp.org/)
- [OWASP AI Security and Privacy Guide](https://owasp.org/www-project-ai-security-and-privacy-guide/)
- [Securing Your AI: A Step-by-Step Guide for CISOs](https://hiddenlayer.com/research/how-well-do-you-know-your-ai-environment/)
- [Securing Your AI: A Step-by-Step Guide for CISOs PT 2](https://hiddenlayer.com/research/securing-your-ai-a-step-by-step-guide-for-cisos-pt2/)
## Academia
- [MIT AI Risk Database](https://airisk.mit.edu/)