From 99f392bb17dfeb5c9305f2e4d53e2731a9f32044 Mon Sep 17 00:00:00 2001
From: Omar Santos
Date: Wed, 24 Jul 2024 18:27:40 -0400
Subject: [PATCH] Update ai_security_tools.md

---
 ai_research/ai_security_tools.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/ai_research/ai_security_tools.md b/ai_research/ai_security_tools.md
index ab555fe..226ed19 100644
--- a/ai_research/ai_security_tools.md
+++ b/ai_research/ai_security_tools.md
@@ -18,6 +18,10 @@ _Products that examine or test models for security issues of various kinds._
 * [Mindgard AI](https://mindgard.ai) - Identifies and remediates risks across AI models, GenAI, LLMs along with AI-powered apps and chatbots.
 * [Protect AI ModelScan](https://protectai.com/modelscan) - Scan models for serialization attacks. [code](https://github.com/protectai/modelscan)
 * [Protect AI Guardian](https://protectai.com/guardian) - Scan models for security issues or policy violations with auditing and reporting.
+* [TextFooler](https://github.com/jind11/TextFooler) - Generates adversarial natural-language attacks against text classification and inference models.
+* [LLMFuzzer](https://github.com/mnns/LLMFuzzer) - Fuzzing framework for LLMs.
+* [Prompt Security Fuzzer](https://www.prompt.security/fuzzer) - A fuzzer for finding prompt injection vulnerabilities.
+* [OpenAttack](https://github.com/thunlp/OpenAttack) - A Python-based textual adversarial attack toolkit.
 
 ## Prompt Firewall and Redaction