Python: Content Filtering Error lacks result
Bug description
When invoking a semantic function with a context that contains an offensive message (e.g., "I hate <some group>") we get an inner error that doesn't contain a content_filter_result attribute.
But when I use Azure OpenAI “use your own data” REST API with the same offensive message the error contains content_filter_result:
{'error': {'message': "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766", 'type': None, 'param': 'prompt', 'code': 'content_filter', 'status': 400, 'innererror': {'code': 'ResponsibleAIPolicyViolation', 'content_filter_result': {'hate': {'filtered': True, 'severity': 'high'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'low'}}}}}
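As a point of reference, the content_filter_result in the REST payload above can be pulled out with a small dict walk (a hedged sketch; extract_content_filter_result is a hypothetical helper, not part of any SDK):

```python
def extract_content_filter_result(error_body: dict) -> dict:
    """Walk error -> innererror -> content_filter_result, defaulting to {}."""
    return (
        error_body.get("error", {})
        .get("innererror", {})
        .get("content_filter_result", {})
    )


# Using the relevant fields from the REST API response shown above:
body = {
    "error": {
        "code": "content_filter",
        "innererror": {
            "code": "ResponsibleAIPolicyViolation",
            "content_filter_result": {
                "hate": {"filtered": True, "severity": "high"},
                "self_harm": {"filtered": False, "severity": "safe"},
                "sexual": {"filtered": False, "severity": "safe"},
                "violence": {"filtered": False, "severity": "low"},
            },
        },
    }
}
result = extract_content_filter_result(body)
print(result["hate"])  # {'filtered': True, 'severity': 'high'}
```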
See OpenAI API reference for "inner_error" property
Maybe this value gets lost along the way.
Expected behavior
content_filter_result should be a non-empty field, with the attributes 'hate', 'self_harm', etc.
Platform: Python semantic-kernel version 0.4.7.dev0
When invoking a semantic function with a context that contains an offensive message (e.g., "I hate <some group>") we get an inner error that doesn't contain a content_filter_result attribute.
Hi @Noam-Microsoft, when you mention the following above, is the semantic function run against Azure OpenAI or OpenAI?
Hi @moonbox3, thank you for your reply.
The kernel I used to invoke the semantic function in example was initialized with:
base_url = https://<some-resource-name>.openai.azure.com
So I assume it's an Azure endpoint.
Thanks for the added information, @Noam-Microsoft. The Azure OpenAI API without OYD should indeed return the content_filter_result if present. We'll investigate.
@Noam-Microsoft we're having trouble reproducing it on our end. With some sample code, we are seeing the Content Filter Result. Would you be able to give us some sample code that reproduces the issue? Thanks for your help.
Yes.
I use a kernel that was initialized with gpt-35-turbo model and Azure OpenAI endpoint.
I set history_str to be 'user: I hate <some group>'
Then I run this code:
llm_response = await self.semantic_function.invoke(input=history_str)
When I inspect llm_response.last_exception.content_filter_result, I see an empty dict.
I reproduced this in both versions 0.4.7.dev0 and 0.5.0.dev0.
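To make the symptom concrete, here is a minimal mock (not the actual semantic-kernel exception class, just a sketch) contrasting the observed empty attribute with the expected populated one:

```python
# Mock sketch of the reported symptom; MockContentFilterError is hypothetical
# and only mirrors the shape of SK's ContentFilterAIException attribute.
class MockContentFilterError(Exception):
    def __init__(self, content_filter_result=None):
        super().__init__("content filter triggered")
        # The bug: this ends up empty even though the service returned
        # a populated content_filter_result in its innererror payload.
        self.content_filter_result = content_filter_result or {}


observed = MockContentFilterError()  # what the reporter sees: empty dict
expected = MockContentFilterError(
    {"hate": {"filtered": True, "severity": "high"}}
)
print(observed.content_filter_result)  # {}
```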
Hi @moonbox3, could you please let me know if there's an update from your side? Thanks!
@juliomenendez do you have any thoughts? Thanks!
@moonbox3, @juliomenendez, would appreciate your response.
@moonbox3, you said you couldn't reproduce it on your end; does that mean you were able to see a non-empty dict result? Could you give me some repro steps or a code sample?
Thanks
This is the sample code we ran to try and repro the issue:
import asyncio

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
from semantic_kernel.connectors.ai.open_ai.exceptions.content_filter_ai_exception import ContentFilterAIException
from semantic_kernel.connectors.ai.open_ai.prompt_execution_settings.azure_chat_prompt_execution_settings import AzureChatPromptExecutionSettings
from semantic_kernel.utils.settings import azure_openai_settings_from_dot_env_as_dict


async def main():
    kernel = sk.Kernel()
    aoai_settings_dict = azure_openai_settings_from_dot_env_as_dict(
        include_deployment=True, include_api_version=True
    )
    azure_text_service = AzureChatCompletion(**aoai_settings_dict)
    kernel.add_chat_service("dv", azure_text_service)

    prompt = """Respond with your best knowledge to the following question:
{{$input}}
"""
    kernel.create_semantic_function(
        prompt_template=prompt, max_tokens=2000, temperature=0.2, top_p=0.5
    )

    prompt = "I hate <redacted>"
    messages = [{"role": "user", "content": prompt}]
    try:
        await azure_text_service.complete_chat(
            messages, AzureChatPromptExecutionSettings()
        )
    except ContentFilterAIException as ex:
        print('Param', ex.param)
        print('Content filter code', ex.content_filter_code)
        print('Content filter result', ex.content_filter_result)
    except Exception as ex:
        print(ex)


if __name__ == "__main__":
    asyncio.run(main())
results in:
Let me find out from @juliomenendez which model/deployment/api version he ran this against.
I ran it against Azure OpenAI GPT 3.5 Turbo:
Properties:
Model name: gpt-35-turbo
Model version: 0301
Version update policy: Once a new default version is available.
Deployment type: Standard
Content Filter: Default
Tokens per Minute Rate Limit (thousands): 120
Rate limit (Tokens per minute): 120000
Rate limit (Requests per minute): 720
And got this response:
Thank you both for the replies. I ran your script with the same input: "I hate <redacted>" (where <redacted> was replaced with a real value). Content filtering was indeed triggered, but without a result:
The only change I made to the script is how I initialized AzureChatCompletion:
def construct_openai_endpoint(base_url: str, deployment_name: str) -> str:
    return f"{base_url}/openai/deployments/{deployment_name}/"


azure_text_service = AzureChatCompletion(
    deployment_name="gpt-35-turbo",
    base_url=construct_openai_endpoint(
        "https://<some_endpoint>.openai.azure.com", "gpt-35-turbo"
    ),
    api_key="<some_key>",
)
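For comparison, a hedged sketch of the other common way to initialize the connector in SK 0.x (assuming AzureChatCompletion accepts an endpoint keyword): pass the bare resource endpoint and let the connector build the /openai/deployments/<name>/ path itself, instead of pre-building base_url as above. The placeholders below are the same as in the snippet above.

```python
# Hypothetical alternative initialization; <some_endpoint> and <some_key>
# are placeholders carried over from the snippet above, not real values.
azure_text_service = AzureChatCompletion(
    deployment_name="gpt-35-turbo",
    endpoint="https://<some_endpoint>.openai.azure.com",
    api_key="<some_key>",
)
```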
Did you use an Azure OpenAI model? Can you give me an example of the values you have in the .env file?
@Noam-Microsoft apologies on the extreme delay here. Are you still experiencing problems?
Closing. Please re-open if you continue to experience issues.