llm-security
llm-security copied to clipboard
New ways of breaking app-integrated LLMs
A scenario which models the situation described [here](https://github.com/velocitatem/llm-cross-prompt-scripting/tree/main/playground) Might be a good addition?
 May I ask how do you inject such hints in the figure into LLM?
Line 140 in the README.md has the following spelling error: `sceanrios/main.py`. Should be `scenarios/main.py`. https://github.com/greshake/llm-security/blob/87f4b7ffa568b7261a79b31573068d8113319212/README.md?plain=1#L140