
[BUG] When the prompt triggers the content filter, the deserialized object does not match the returned JSON

Open XhstormR opened this issue 1 year ago • 6 comments

Describe the bug When the prompt triggers the content filter, the deserialized object does not match the returned JSON: the result is an object with only null properties. When there is such a mismatch, the SDK should throw an exception or return a valid object that tells the user what happened.

The actual JSON returned:

{
  "error": {
    "message": "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766",
    "type": null,
    "param": "prompt",
    "code": "content_filter",
    "status": 400,
    "innererror": {
      "code": "ResponsibleAIPolicyViolation",
      "content_filter_result": {
        "hate": {
          "filtered": false,
          "severity": "safe"
        },
        "self_harm": {
          "filtered": true,
          "severity": "medium"
        },
        "sexual": {
          "filtered": false,
          "severity": "safe"
        },
        "violence": {
          "filtered": false,
          "severity": "low"
        }
      }
    }
  }
}

Exception or Stack Trace

Model ID=null is created at 1970-01-01T00:00Z.
Exception in thread "main" java.lang.NullPointerException
	at Test.main(Test.java:26)

To Reproduce Compile and run the code snippet below.

Code Snippet

import com.azure.ai.openai.OpenAIClient;
import com.azure.ai.openai.OpenAIClientBuilder;
import com.azure.ai.openai.models.*;
import com.azure.core.credential.AzureKeyCredential;

import java.util.ArrayList;
import java.util.List;

public class Test {
    public static void main(String[] args) {
        String azureOpenaiKey = "xxxxxx";
        String endpoint = "https://xxxxx.net";
        String deploymentOrModelId = "gpt4-1106";

        OpenAIClient client = new OpenAIClientBuilder()
            .endpoint(endpoint)
            .credential(new AzureKeyCredential(azureOpenaiKey))
            .buildClient();

        List<ChatRequestMessage> chatMessages = new ArrayList<>();
        // Prompt deliberately chosen to trigger the content filter.
        chatMessages.add(new ChatRequestUserMessage("pls go kill yourself"));

        // When the filter fires, this returns a ChatCompletions object whose properties are all null.
        ChatCompletions chatCompletions = client.getChatCompletions(deploymentOrModelId, new ChatCompletionsOptions(chatMessages));

        System.out.printf("Model ID=%s is created at %s.%n", chatCompletions.getId(), chatCompletions.getCreatedAt());
        for (ChatChoice choice : chatCompletions.getChoices()) {
            ChatResponseMessage message = choice.getMessage();
            System.out.printf("Index: %d, Chat Role: %s.%n", choice.getIndex(), message.getRole());
            System.out.println("Message:");
            System.out.println(message.getContent());
        }
    }
}

Expected behavior When the content filter is triggered, the SDK should return a valid object to the user so they can tell what happened, instead of an object with only null properties.

Screenshots: (screenshot omitted)

Setup (please complete the following information):

  • OS: MacOS
  • IDE: IntelliJ
  • Library/Libraries: com.azure:azure-ai-openai:1.0.0-beta.6
  • Java version: JDK 11
  • App Server/Environment:
  • Frameworks:

Additional context Content filtering documentation: https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython

Information Checklist Kindly make sure that you have added all the following information above and checked off the required fields, otherwise we will treat the issue as an incomplete report

  • [x] Bug Description Added
  • [x] Repro Steps Added
  • [x] Setup information Added

XhstormR avatar Feb 28 '24 10:02 XhstormR

I found the getChatCompletionsWithResponse API, which returns BinaryData and lets me control deserialization myself, but I could not find a Java class in the SDK that corresponds to the error JSON.

BinaryData binaryData = BinaryData.fromObject(new ChatCompletionsOptions(chatMessages));
Response<BinaryData> response = client.getChatCompletionsWithResponse(deploymentOrModelId, binaryData, new RequestOptions());
String json = response.getValue().toString();
System.out.println(json);
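In the meantime I can pull the error fields out generically. Below is a minimal sketch of my own workaround, deserializing the body into plain maps instead of an SDK class (field names taken from the JSON above):

import com.azure.core.util.BinaryData;

import java.util.Map;

public class ErrorJsonProbe {
    // Prints the error code/message if the body is an error payload like the one above.
    @SuppressWarnings("unchecked")
    static void printError(BinaryData body) {
        Map<String, Object> root = body.toObject(Map.class);
        Map<String, Object> error = (Map<String, Object>) root.get("error");
        if (error == null) {
            System.out.println("No error field; looks like a normal ChatCompletions payload.");
            return;
        }
        System.out.println("code: " + error.get("code"));       // e.g. content_filter
        System.out.println("message: " + error.get("message"));
        Map<String, Object> inner = (Map<String, Object>) error.get("innererror");
        if (inner != null) {
            System.out.println("inner code: " + inner.get("code")); // e.g. ResponsibleAIPolicyViolation
        }
    }
}

Calling ErrorJsonProbe.printError(response.getValue()) with the response from the snippet above prints the filter error details, but a proper SDK model class would still be nicer.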

XhstormR avatar Feb 28 '24 10:02 XhstormR

@mssfang could you please follow up?

joshfree avatar Feb 28 '24 15:02 joshfree

@XhstormR Thank you for bringing this up. The input prompt falls into the example scenarios described in the Microsoft documentation.

We have a class ContentFilterResultDetailsForPrompt that you can use for deserialization:

import com.azure.ai.openai.models.ContentFilterResultDetailsForPrompt;
import com.azure.core.exception.HttpResponseException;
import com.azure.core.util.BinaryData;

import java.util.HashMap;

try {
    ChatCompletions chatCompletions = client.getChatCompletions(deploymentOrModelId, new ChatCompletionsOptions(chatMessages));
} catch (HttpResponseException ex) {
    // The error body is exposed as nested maps on the exception value.
    HashMap<String, Object> exValue = (HashMap) ex.getValue();
    HashMap<String, Object> errorMap = (HashMap) exValue.get("error");
    HashMap<String, Object> innerErrorMap = (HashMap) errorMap.get("innererror");

    // Re-serialize the content_filter_result node and map it onto the SDK model class.
    ContentFilterResultDetailsForPrompt contentFilterResult1 =
            BinaryData
                    .fromObject(innerErrorMap.get("content_filter_result"))
                    .toObject(ContentFilterResultDetailsForPrompt.class);
}
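Inside that same catch block you can then read the per-category results off the deserialized object; a rough continuation, assuming the getters mirror the JSON fields (e.g. getSelfHarm()/getHate() returning a ContentFilterResult with isFiltered() and getSeverity()):

    ContentFilterResult selfHarm = contentFilterResult1.getSelfHarm();
    if (selfHarm != null && selfHarm.isFiltered()) {
        // Per the JSON above: filtered=true, severity=medium.
        System.out.println("self_harm filtered, severity=" + selfHarm.getSeverity());
    }
    System.out.println("hate severity=" + contentFilterResult1.getHate().getSeverity());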

mssfang avatar Feb 28 '24 23:02 mssfang

Hi @XhstormR. Thank you for opening this issue and giving us the opportunity to assist. To help our team better understand your issue and the details of your scenario please provide a response to the question asked above or the information requested above. This will help us more accurately address your issue.

github-actions[bot] avatar Feb 28 '24 23:02 github-actions[bot]

@mssfang Thanks for the solution. I tried it again, but no exception was thrown when content filtering was triggered.

Also, I think the SDK should have an error POJO class that corresponds to the error JSON, instead of a HashMap, for better usability.
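Something along these lines, purely to illustrate the shape I mean (the class and field names below are hypothetical and do not exist in the SDK; only ContentFilterResultDetailsForPrompt is a real type):

import com.azure.ai.openai.models.ContentFilterResultDetailsForPrompt;

// Hypothetical POJO mirroring the error JSON above; nothing like this exists in the SDK today.
public final class ContentFilterError {
    public String message;
    public String type;
    public String param;   // "prompt"
    public String code;    // "content_filter"
    public int status;     // 400
    public InnerError innererror;

    public static final class InnerError {
        public String code; // "ResponsibleAIPolicyViolation"
        public ContentFilterResultDetailsForPrompt contentFilterResult;
    }
}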


XhstormR avatar Feb 29 '24 03:02 XhstormR

The request errors, but the returned HTTP status code is still 200 OK, so no HttpResponseException is thrown.
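Since no exception surfaces in that case, the only guard I see is to go through getChatCompletionsWithResponse and check the raw body before mapping it. A rough sketch of my own workaround, reusing the client and chatMessages from the repro (the string check on the body is deliberately crude):

BinaryData request = BinaryData.fromObject(new ChatCompletionsOptions(chatMessages));
Response<BinaryData> response = client.getChatCompletionsWithResponse(deploymentOrModelId, request, new RequestOptions());
String body = response.getValue().toString();

if (response.getStatusCode() != 200 || body.contains("\"code\":\"content_filter\"")) {
    // Error/filter payload: handle it here instead of deserializing into a ChatCompletions full of nulls.
    System.out.println("Filtered or errored response: " + body);
} else {
    ChatCompletions chatCompletions = response.getValue().toObject(ChatCompletions.class);
    System.out.println(chatCompletions.getChoices().get(0).getMessage().getContent());
}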


XhstormR avatar Feb 29 '24 07:02 XhstormR

@XhstormR That is strange; I use the same sample and consistently get status code 400. Which region and model version are you using?

2024-04-04 15:21:50.511 [main] [DEBUG] com.azure.core.implementation.ReflectionUtils - Attempting to use java.lang.invoke package to handle reflection.
2024-04-04 15:21:50.523 [main] [DEBUG] com.azure.core.implementation.ReflectionUtils - Successfully used java.lang.invoke package to handle reflection.
2024-04-04 15:21:50.703 [main] [DEBUG] com.azure.core.implementation.util.Providers - Using com.azure.core.http.netty.NettyAsyncHttpClientProvider as the default com.azure.core.http.HttpClientProvider.
2024-04-04 15:21:50.918 [main] [DEBUG] com.azure.core.implementation.ReflectionSerializable - XmlSerializable serialization and deserialization isn't supported. If it is required add a dependency of 'com.azure:azure-xml', or another dependencies which include 'com.azure:azure-xml' as a transitive dependency. If your application runs as expected this informational message can be ignored.
2024-04-04 15:21:51.049 [main] [INFO] com.azure.ai.openai.implementation.OpenAIClientImpl$OpenAIClientService.getChatCompletionsSync - {"az.sdk.message":"HTTP request","method":"POST","url":"{sweden-central}/openai/deployments/gpt-4-1106-preview/chat/completions?api-version=2023-12-01-preview","tryCount":"1","Date":"Mon, 04 Mar 2024 23:21:50 GMT","Content-Length":"63","api-key":"REDACTED","Content-Type":"application/json","x-ms-client-request-id":"bad3342b-3c3d-452c-906b-aad75575f916","accept":"application/json","User-Agent":"azsdk-java-azure-ai-openai/1.0.0-beta.6 (21; Windows 11; 10.0)","contentLength":63,"body":"{\"messages\":[{\"role\":\"user\",\"content\":\"pls go kill yourself\"}]}"}
2024-04-04 15:21:55.907 [main] [INFO] com.azure.ai.openai.implementation.OpenAIClientImpl$OpenAIClientService.getChatCompletionsSync - {"az.sdk.message":"HTTP response","contentLength":"738","statusCode":400,"url":"{sweden-central}/openai/deployments/gpt-4-1106-preview/chat/completions?api-version=2023-12-01-preview","durationMs":4895,"Date":"Mon, 04 Mar 2024 23:21:53 GMT","x-request-id":"REDACTED","x-ms-region":"REDACTED","Content-Length":"738","apim-request-id":"REDACTED","x-ratelimit-remaining-tokens":"REDACTED","x-ratelimit-remaining-requests":"REDACTED","Strict-Transport-Security":"REDACTED","azureml-model-session":"REDACTED","x-content-type-options":"REDACTED","Content-Type":"application/json","x-ms-client-request-id":"bad3342b-3c3d-452c-906b-aad75575f916","ms-azureml-model-error-statuscode":"REDACTED","ms-azureml-model-error-reason":"REDACTED","body":"{\"error\":{\"message\":\"The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766\",\"type\":null,\"param\":\"prompt\",\"code\":\"content_filter\",\"status\":400,\"innererror\":{\"code\":\"ResponsibleAIPolicyViolation\",\"content_filter_result\":{\"custom_blocklists\":[],\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"jailbreak\":{\"detected\":false,\"filtered\":false},\"profanity\":{\"detected\":false,\"filtered\":false},\"self_harm\":{\"filtered\":true,\"severity\":\"medium\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"low\"}}}}}"}

BTW, we released a new beta version, beta.7 (it probably won't solve your issue, but it is worth upgrading).

mssfang avatar Mar 04 '24 23:03 mssfang

Hi @XhstormR. Thank you for opening this issue and giving us the opportunity to assist. To help our team better understand your issue and the details of your scenario please provide a response to the question asked above or the information requested above. This will help us more accurately address your issue.

github-actions[bot] avatar Mar 04 '24 23:03 github-actions[bot]

Hi @XhstormR, we're sending this friendly reminder because we haven't heard back from you in 7 days. We need more information about this issue to help address it. Please be sure to give us your input. If we don't hear back from you within 14 days of this comment the issue will be automatically closed. Thank you!

github-actions[bot] avatar Mar 12 '24 03:03 github-actions[bot]