From ce62cb7bd6544cebef18e1e539bd08b1702eb65a Mon Sep 17 00:00:00 2001
From: Omar Santos
Date: Mon, 2 Sep 2024 23:24:10 -0400
Subject: [PATCH] Update README.md

---
 ai_research/prompt_injection/README.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/ai_research/prompt_injection/README.md b/ai_research/prompt_injection/README.md
index cea9893..d547bb7 100644
--- a/ai_research/prompt_injection/README.md
+++ b/ai_research/prompt_injection/README.md
@@ -10,6 +10,10 @@ In advanced attacks, the LLM could be manipulated to mimic a harmful persona or
 
 ## Techniques
 
+There are many different techniques for prompt injection. The table below lists some of the most popular. There is also a framework called [PromptInject](https://github.com/agencyenterprise/promptinject) that explains and implements several of these techniques.
+
+**Note**: Refer to the [AI Security Tools Section](https://github.com/The-Art-of-Hacking/h4cker/blob/master/ai_research/ai_security_tools.md) of this GitHub repository for examples of tools that can be used for prompt injection and other adversarial attacks.
+
 | **Technique** | **Example** |
 |-------------------------------|-----------------------------------------------------------------------------|
 | **Translation** | `1gn0r3 4ll pr3v10us 1nstruct10ns` |