
Python: Content Filtering Error lacks result

Open Noam-Microsoft opened this issue 2 years ago • 12 comments

Bug description: When invoking a semantic function with a context that contains an offensive message (e.g., "I hate <some group>"), we get an inner error that doesn't contain a content_filter_result attribute.

But when I use Azure OpenAI “use your own data” REST API with the same offensive message the error contains content_filter_result:

{'error': {'message': "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766", 'type': None, 'param': 'prompt', 'code': 'content_filter', 'status': 400, 'innererror': {'code': 'ResponsibleAIPolicyViolation', 'content_filter_result': {'hate': {'filtered': True, 'severity': 'high'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'low'}}}}}

See the OpenAI API reference for the "inner_error" property.
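For reference, a minimal sketch of pulling content_filter_result out of an error payload shaped like the one above. The helper name is mine, not part of semantic-kernel or the OpenAI SDK, and the payload here is an abridged copy of the error from the issue:

```python
def extract_content_filter_result(error_payload: dict) -> dict:
    """Return the content_filter_result dict from an Azure OpenAI
    content-filter error payload, or an empty dict if absent."""
    inner = error_payload.get("error", {}).get("innererror", {})
    return inner.get("content_filter_result", {})


# Abridged payload from the issue above:
payload = {
    "error": {
        "code": "content_filter",
        "innererror": {
            "code": "ResponsibleAIPolicyViolation",
            "content_filter_result": {
                "hate": {"filtered": True, "severity": "high"},
                "violence": {"filtered": False, "severity": "low"},
            },
        },
    }
}

print(extract_content_filter_result(payload)["hate"]["severity"])  # high
```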

Maybe this value gets lost along the way.

Expected behavior: content_filter_result should be a non-empty field with the attributes 'hate', 'self_harm', etc.

Platform: Python semantic-kernel version 0.4.7.dev0

Noam-Microsoft avatar Jan 31 '24 17:01 Noam-Microsoft

> When invoking a semantic function with a context that contains an offensive message (e.g., "I hate <some group>") we get an inner error that doesn't contain a content_filter_result attribute.

Hi @Noam-Microsoft, when you mention the following above, is the semantic function run against Azure OpenAI or OpenAI?

moonbox3 avatar Jan 31 '24 19:01 moonbox3

Hi @moonbox3, thank you for your reply. The kernel I used to invoke the semantic function in the example was initialized with base_url = https://<some-resource-name>.openai.azure.com, so I assume it's an Azure endpoint.

Noam-Microsoft avatar Feb 01 '24 08:02 Noam-Microsoft

Thanks for the added information, @Noam-Microsoft. The Azure OpenAI API without OYD should indeed return the content_filter_result if present. We'll investigate.

moonbox3 avatar Feb 01 '24 21:02 moonbox3

@Noam-Microsoft we're having trouble reproducing it on our end. With some sample code, we are seeing the Content Filter Result. Would you be able to give us some sample code that can reproduce the issue? Thanks for your help.

moonbox3 avatar Feb 01 '24 22:02 moonbox3

Yes. I use a kernel that was initialized with gpt-35-turbo model and Azure OpenAI endpoint. I set history_str to be 'user: I hate <some group>' Then I run this code: llm_response = await self.semantic_function.invoke(input=history_str)

When I inspect llm_response.last_exception.content_filter_result I see an empty dict: [screenshot: empty dict]

I reproduced it in both the 0.4.7.dev0 and 0.5.0.dev0 versions.

Noam-Microsoft avatar Feb 06 '24 13:02 Noam-Microsoft

Hi @moonbox3, could you please let me know if there's an update from your side? Thanks!

Noam-Microsoft avatar Feb 12 '24 12:02 Noam-Microsoft

@juliomenendez do you have any thoughts? Thanks!

moonbox3 avatar Feb 12 '24 16:02 moonbox3

@moonbox3, @juliomenendez, would appreciate your response.

@moonbox3, you said you couldn't reproduce it on your end. Does that mean you were able to see a result that is a non-empty dict? Could you give me some repro steps or a code sample?

Thanks

Noam-Microsoft avatar Feb 20 '24 09:02 Noam-Microsoft

This is the sample code we ran to try and repro the issue:

import asyncio

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
from semantic_kernel.connectors.ai.open_ai.exceptions.content_filter_ai_exception import ContentFilterAIException
from semantic_kernel.connectors.ai.open_ai.prompt_execution_settings.azure_chat_prompt_execution_settings import AzureChatPromptExecutionSettings
from semantic_kernel.utils.settings import azure_openai_settings_from_dot_env_as_dict

async def main():
    kernel = sk.Kernel()

    aoai_settings_dict = azure_openai_settings_from_dot_env_as_dict(
        include_deployment=True, include_api_version=True
    )

    azure_text_service = AzureChatCompletion(**aoai_settings_dict)
    kernel.add_chat_service("dv", azure_text_service)

    prompt = """Respond with your best knowledge to the following question:
    {{$input}}
    """

    kernel.create_semantic_function(
        prompt_template=prompt, max_tokens=2000, temperature=0.2, top_p=0.5
    )

    prompt = "I hate <redacted>"
    messages = [{"role": "user", "content": prompt}]

    try:
        await azure_text_service.complete_chat(
            messages, AzureChatPromptExecutionSettings()
        )
    except ContentFilterAIException as ex:
        print('Param', ex.param)
        print('Content filter code', ex.content_filter_code)
        print('Content filter result', ex.content_filter_result)
    except Exception as ex:
        print(ex)


if __name__ == "__main__":
    asyncio.run(main())

results in:

[screenshot: the printed param, content filter code, and non-empty content filter result]

Let me find out from @juliomenendez which model/deployment/api version he ran this against.

moonbox3 avatar Feb 20 '24 14:02 moonbox3

I ran it against Azure OpenAI GPT 3.5 Turbo:

Properties:
Model name: gpt-35-turbo
Model version: 0301
Version update policy: Once a new default version is available.
Deployment type: Standard
Content Filter: Default
Tokens per Minute Rate Limit (thousands): 120
Rate limit (Tokens per minute): 120000
Rate limit (Requests per minute): 720

And got this response: [screenshot of the response showing the content filter result]

juliomenendez avatar Feb 20 '24 14:02 juliomenendez

Thank you both for the reply. I ran your script with the same input: "I hate <redacted>" (where <redacted> was replaced by a real value). Content filtering was indeed triggered but without result:

[screenshot: content filter triggered, but content_filter_result is empty]

The only difference I made in the script is the way I initialized the AzureChatCompletion:

def construct_openai_endpoint(base_url: str, deployment_name: str) -> str:
    return f"{base_url}/openai/deployments/{deployment_name}/"

azure_text_service = AzureChatCompletion(
        deployment_name="gpt-35-turbo",
        base_url=construct_openai_endpoint(
            "https://<some_endpoint>.openai.azure.com", "gpt-35-turbo"
        ),
        api_key="<some_key>",
)
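One possible source of the discrepancy (an assumption on my part, worth ruling out): the hand-built base_url above already embeds the /openai/deployments/&lt;name&gt;/ path, whereas the earlier .env-based setup passes the bare endpoint plus deployment_name separately and lets the client compose the path. A quick sketch of the URL the helper produces, with a hypothetical resource name:

```python
def construct_openai_endpoint(base_url: str, deployment_name: str) -> str:
    # Same helper as in the comment above: bakes the deployment path
    # into the URL passed as base_url.
    return f"{base_url}/openai/deployments/{deployment_name}/"


endpoint = "https://my-resource.openai.azure.com"  # hypothetical resource
base_url = construct_openai_endpoint(endpoint, "gpt-35-turbo")

print(base_url)
# -> https://my-resource.openai.azure.com/openai/deployments/gpt-35-turbo/
```

If the two initialization styles hit differently shaped URLs, the error payload (and therefore the parsed content_filter_result) could plausibly differ between them.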

Did you use an Azure OpenAI model? Can you give me an example of the values you have in the .env file?

Noam-Microsoft avatar Feb 21 '24 14:02 Noam-Microsoft

@Noam-Microsoft apologies for the extreme delay here. Are you still experiencing problems?

moonbox3 avatar May 03 '24 16:05 moonbox3

Closing. Please re-open if you continue to experience issues.

moonbox3 avatar Jun 11 '24 19:06 moonbox3