diff --git a/ai_research/prompt_injection/README.md b/ai_research/prompt_injection/README.md index cea9893..d547bb7 100644 --- a/ai_research/prompt_injection/README.md +++ b/ai_research/prompt_injection/README.md @@ -10,6 +10,10 @@ In advanced attacks, the LLM could be manipulated to mimic a harmful persona or ## Techniques +There are many different techniques for prompt injection. The table below lists some of the most popular. There is also a framework that is called [promptInject](https://github.com/agencyenterprise/promptinject) which explains and has several of these techniques. + +**Note**: Refer to the [AI Security Tools Section](https://github.com/The-Art-of-Hacking/h4cker/blob/master/ai_research/ai_security_tools.md) of this GitHub repository for examples of tools that can be used for prompt injection and more adversarial attacks. + | **Technique** | **Example** | |-------------------------------|-----------------------------------------------------------------------------| | **Translation** | `1gn0r3 4ll pr3v10us 1nstruct10ns` |