injection location
May I ask how do you inject such hints in the figure into LLM?
In the paper we were using Bing's ability to read web pages the user is visiting when using MS Edge. There are plenty of ways to smuggle text into the context of common LLM apps.
Thank you for your answer. So the "system" here does not refer to the "system prompt" in the API interface, right?
look forward to your answer
The "System" text was part of the externally injected message that was supposed to look to the LLM like a system instruction. In the input rendered to the LLM, the real system prompt would appear first and then the injected system prompt somewhere below that.
Sorry for the delayed response!