
Modify transform function to support batch inference

Open · nikhil-sk opened this pull request 3 years ago • 3 comments

Issue #, if available: This PR fixes the issue described here in the PyTorch inference toolkit repo; the fix is applied at the transform() function in the sagemaker-inference-toolkit, which the PyTorch toolkit inherits from.

Description of changes:

  1. This PR fixes the issue where the transform() function drops all but one request when running prediction on a batch.
  2. The transform() function now loops through the input data and runs _transform_fn() on each input, appending each response to a list; once all inputs are processed, the list is returned (see the sketch below).
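
For illustration, here is a minimal, self-contained sketch of the looping behavior this change introduces. It is not the exact diff: _transform_fn, the hard-coded content types, and the dummy model below are placeholders for the toolkit's real handler chain and the headers that would normally be read from the serving context.

from typing import Any, List


def _transform_fn(model: Any, input_data: bytes, content_type: str, accept: str) -> str:
    # Placeholder for the toolkit's preprocess -> predict -> postprocess chain.
    return "prediction for: " + input_data.decode()


def transform(model: Any, data: List[dict]) -> List[str]:
    # `data` holds every request in the TorchServe batch, not just the first one.
    responses = []
    for request in data:
        input_data = request.get("body")  # payload of one batched request
        responses.append(_transform_fn(model, input_data, "application/json", "application/json"))
    # The model server expects one response entry per request, in order.
    return responses


if __name__ == "__main__":
    batch = [{"body": b"request one"}, {"body": b"request two"}, {"body": b"request three"}]
    print(transform(model=None, data=batch))  # three predictions instead of one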

Config used:

env_variables_dict = {
    "SAGEMAKER_TS_BATCH_SIZE": "3",
    "SAGEMAKER_TS_MAX_BATCH_DELAY": "10000",
    "SAGEMAKER_TS_MIN_WORKERS": "1",
    "SAGEMAKER_TS_MAX_WORKERS": "1",
}
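
For reference, a sketch of how this config might be attached to the endpoint using the SageMaker Python SDK's PyTorchModel; the model artifact path, IAM role, entry point, instance type, and version strings below are placeholder assumptions, not values from this PR.

from sagemaker.pytorch import PyTorchModel

env_variables_dict = {
    "SAGEMAKER_TS_BATCH_SIZE": "3",
    "SAGEMAKER_TS_MAX_BATCH_DELAY": "10000",
    "SAGEMAKER_TS_MIN_WORKERS": "1",
    "SAGEMAKER_TS_MAX_WORKERS": "1",
}

model = PyTorchModel(
    model_data="s3://my-bucket/model.tar.gz",                      # placeholder artifact location
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder IAM role
    entry_point="inference.py",                                    # placeholder inference script
    framework_version="1.12",
    py_version="py38",
    env=env_variables_dict,                                        # TorchServe reads these at startup
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")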

Requests were sent to the SageMaker endpoint as follows:

import multiprocessing

# `predictor` is the SageMaker Predictor for the deployed endpoint.


def invoke(endpoint_name):
    # The argument only fans the call out across the pool; the request itself
    # goes through the shared `predictor`.
    return predictor.predict(
        "{Bloomberg has decided to publish a new report on global economic situation.}"
    )


endpoint_name = predictor.endpoint_name
pool = multiprocessing.Pool(3)  # three concurrent clients, matching the batch size of 3
results = pool.map(invoke, 5 * [endpoint_name])  # five requests in total
pool.close()
pool.join()
print(results)

Logs:

lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,606 [INFO ] W-9000-model_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1658385260606
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,608 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Backend received inference at: 1658385260
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,608 [WARN ] W-9000-model_1.0-stderr MODEL_LOG - Downloading: 100%|██████████| 28.0/28.0 [00:00<00:00, 40.9kB/s]
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,609 [WARN ] W-9000-model_1.0-stderr MODEL_LOG - Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,829 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT1
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT2
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Got input Data: {Bloomberg has decided to publish a new report on global economic situation.}
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 223
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PRED SequenceClassifierOutput(loss=None, logits=tensor([[ 0.1999, -0.2964]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PREDICTION ['Not Accepted']
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT1
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,831 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT2
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,830 [INFO ] W-9000-model_1.0 ACCESS_LOG - /172.18.0.1:41768 "POST /invocations HTTP/1.1" 200 235
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,831 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Got input Data: {Bloomberg has decided to publish a new report on global economic situation.}
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,831 [INFO ] W-9000-model_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385250
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,831 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PRED SequenceClassifierOutput(loss=None, logits=tensor([[ 0.1999, -0.2964]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,831 [INFO ] W-9000-model_1.0 TS_METRICS - QueueTime.ms:0|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385260
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PREDICTION ['Not Accepted']
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT1
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0 ACCESS_LOG - /172.18.0.1:41766 "POST /invocations HTTP/1.1" 200 237
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT2
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385250
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Got input Data: {Bloomberg has decided to publish a new report on global economic situation.}
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0 TS_METRICS - QueueTime.ms:0|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385260
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,832 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PRED SequenceClassifierOutput(loss=None, logits=tensor([[ 0.1999, -0.2964]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,833 [INFO ] W-9000-model_1.0 ACCESS_LOG - /172.18.0.1:41772 "POST /invocations HTTP/1.1" 200 238
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,833 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PREDICTION ['Not Accepted']
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,833 [INFO ] W-9000-model_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385250
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,833 [INFO ] W-9000-model_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:220.84|#ModelName:model,Level:Model|#hostname:4eaca41fef85,requestID:48456f5d-451c-4b5b-a377-b70c0a630510,b2f48fcb-5e16-47d2-a592-a08b03794a1e,0d9d4a84-bd30-409e-b6ab-abe5d975efe9,timestamp:1658385260
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,833 [INFO ] W-9000-model_1.0 TS_METRICS - QueueTime.ms:0|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385260
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:20,834 [INFO ] W-9000-model_1.0 TS_METRICS - WorkerThreadTime.ms:5|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385260
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,879 [INFO ] W-9000-model_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1658385270879
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,880 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Backend received inference at: 1658385270
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,981 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT1
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,981 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT2
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,981 [INFO ] W-9000-model_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 101
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Got input Data: {Bloomberg has decided to publish a new report on global economic situation.}
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0 ACCESS_LOG - /172.18.0.1:41768 "POST /invocations HTTP/1.1" 200 10104
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PRED SequenceClassifierOutput(loss=None, logits=tensor([[ 0.1999, -0.2964]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385250
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PREDICTION ['Not Accepted']
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT1
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,982 [INFO ] W-9000-model_1.0 TS_METRICS - QueueTime.ms:10000|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385270
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INPUT2
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0 ACCESS_LOG - /172.18.0.1:41766 "POST /invocations HTTP/1.1" 200 10105
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Got input Data: {Bloomberg has decided to publish a new report on global economic situation.}
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385250
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PRED SequenceClassifierOutput(loss=None, logits=tensor([[ 0.1999, -0.2964]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0 TS_METRICS - QueueTime.ms:10000|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385270
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - PREDICTION ['Not Accepted']
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,983 [INFO ] W-9000-model_1.0 TS_METRICS - WorkerThreadTime.ms:3|#Level:Host|#hostname:4eaca41fef85,timestamp:1658385270
lep82nflth-algo-1-7djas  | 2022-07-21T06:34:30,984 [INFO ] W-9000-model_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:100.58|#ModelName:model,Level:Model|#hostname:4eaca41fef85,requestID:6a42623e-e66c-4681-a508-a297b519ee39,bbea0728-5968-43e7-abd4-cdbe9cc61455,timestamp:1658385270
[b'["Not Accepted"]', b'["Not Accepted"]', b'["Not Accepted"]', b'["Not Accepted"]', b'["Not Accepted"]']

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

Merge Checklist

Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your pull request.

General

  • [ ] I have read the CONTRIBUTING doc
  • [ ] I used the commit message format described in CONTRIBUTING
  • [ ] I have used the regional endpoint when creating S3 and/or STS clients (if appropriate)
  • [ ] I have updated any necessary documentation, including READMEs

Tests

  • [ ] I have added tests that prove my fix is effective or that my feature works (if appropriate)
  • [ ] I have checked that my tests are not configured for a specific region or account (if appropriate)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

nikhil-sk avatar Jul 21 '22 07:07 nikhil-sk

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 2389f3c724a71c047ff35242f31b2b23d1acad38
  • Result: FAILED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

sagemaker-bot avatar Jul 21 '22 07:07 sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 0796a7bad4e3171930e20af026c0abb7d8f92c7c
  • Result: SUCCEEDED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

sagemaker-bot avatar Jul 22 '22 00:07 sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: c9193ee18087139620851ff876ce4347d651b151
  • Result: FAILED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

sagemaker-bot avatar Aug 04 '22 21:08 sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 3b35a9ed7e343114076df0915f6712581dee2519
  • Result: FAILED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

sagemaker-bot avatar Sep 06 '22 21:09 sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 5d2d14516176e7b2b77fa2f55e33f6f5900afad6
  • Result: FAILED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

sagemaker-bot avatar Sep 06 '22 21:09 sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 47f810689e71370be4d2ed78bca649f57e03bd87
  • Result: FAILED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

sagemaker-bot avatar Sep 06 '22 22:09 sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 67f57effecb0c5ab45ef02b9d035023646c61e11
  • Result: FAILED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

sagemaker-bot avatar Sep 06 '22 22:09 sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 57b4e236e50076767a9b19c89826be6a1ea635f0
  • Result: FAILED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

sagemaker-bot avatar Sep 06 '22 22:09 sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: fde4f84da47c085a663978fc44055dd7044e75de
  • Result: FAILED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

sagemaker-bot avatar Sep 07 '22 01:09 sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: ef4adf332d647e936fa75449bb238a6cb8164b90
  • Result: FAILED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

sagemaker-bot avatar Sep 07 '22 01:09 sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 1c8dac618a573b13c8f8b6fa707099e3b0eff1cc
  • Result: SUCCEEDED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

sagemaker-bot avatar Sep 07 '22 02:09 sagemaker-bot