io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
Hello team,
I'm unable to understand this error, as it is thrown after the entire workflow has completed!
Something like this (logs are from the docker logs command):
pygsl_prod_service.0.q3shploty9kt@nav0789 | 2024-08-05T16:52:12,397 [INFO ] wf-manager-thread-0 ACCESS_LOG - /10.0.0.2:46572 "POST /wfpredict/check_extract HTTP/1.1" 200 930926
pygsl_prod_service.0.q3shploty9kt@nav0789 | 2024-08-05T16:52:12,398 [INFO ] wf-manager-thread-0 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:Unknown,timestamp:1722856932
pygsl_prod_service.0.q3shploty9kt@nav0789 | 2024-08-05T16:52:12,402 [ERROR] epollEventLoopGroup-3-2 org.pytorch.serve.http.HttpRequestHandler -
pygsl_prod_service.0.q3shploty9kt@nav0789 | io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
Questions:
- Is this error because the client API call is not waiting for the response? (The timeout on the client end is 30 minutes, and a request usually takes around 12-15 minutes.) A sketch of the client call is shown below.
- What should be the correct values for number_of_netty_threads, netty_client_threads, and job_queue_size? I set these values after looking at this comment.
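For reference, a minimal sketch of the client call with an explicit timeout (this assumes a Python requests-based client; the actual client code is not shown here, and the URL is taken from config.properties and the ACCESS_LOG above):

import requests

# Assumed client sketch; the real client may differ.
# Inference address from config.properties, workflow name from the ACCESS_LOG.
URL = "http://0.0.0.0:8080/wfpredict/check_extract"

with open("input.txt", "rb") as f:
    # timeout=(connect, read): 30 s to connect, 30 min to read the response,
    # matching the 30-minute client timeout mentioned above.
    resp = requests.post(URL, data=f, timeout=(30, 30 * 60))
resp.raise_for_status()
print(resp.status_code, len(resp.content))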
TS config:
- model-config.yaml
  maxBatchDelay: 10000000
  responseTimeout: 10000000
- workflow.yaml
  models:
    min-workers: 1
    max-workers: 2
    max-batch-delay: 10000000
    retry-attempts: 1
    timeout-ms: 3000000
    m1:
      url: model-1.mar
    m2:
      url: model-2.mar
    m3:
      url: model-3.mar
    m4:
      url: model-4.mar
    m5:
      url: model-5.mar
  dag:
    pre_processing: [m1]
    m1: [m2]
    m2: [m3]
    m3: [m4]
    m4: [m5]
    m5: [post_processing]
- config.properties
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
# management
number_of_netty_threads=16 # the default shown is 0; I set this hoping to resolve the connection error
netty_client_threads=16 # the default shown is 0; I set this hoping to resolve the connection error
default_response_timeout=10000000
# default_workers_per_model=2
max_request_size=65535000
max_response_size=65535000
load_models=
model_store=model_store
workflow_store=wf_store
enable_envvars_config=true
disable_system_metrics=true
job_queue_size=1
- Python Packages:
torch==1.13.1+cu117
torchvision==0.14.1+cu117
torchaudio==0.13.1+cu117
torchserve==0.10.0
torch-model-archiver==0.10.0
torch-workflow-archiver==0.2.12
nvgpu==0.10.0
captum==0.7.0
Appreciate your help guys! Thanks.
Hi @KD1994, looking at your config I first thought it might be the high maxBatchDelay you selected. The maxBatchDelay can be seen as a time window during which the server waits to collect (and then batch) multiple requests. It's given in ms, which means your server will wait about 166 minutes (or until the batch is full) before it starts processing; see the quick conversion below. But if you did not set a batch_size != 1 somewhere else, its default should be 1 and processing will start as soon as the first request comes in.
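For reference, the conversion behind that 166-minute figure (plain Python arithmetic, nothing TorchServe-specific):

# maxBatchDelay from the model-config.yaml above, given in milliseconds.
max_batch_delay_ms = 10_000_000
print(max_batch_delay_ms / 1000 / 60)  # ~166.7 minutes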
Can you give a bit more context on your environment? Is this a test setup where you send a single request? Is this happening during a load test with multiple requests? Under normal (not extreme) load, the netty threads should not be of any concern and can be left at their default values.
I also noticed that you selected job_queue_size=1, which means you're only queueing a single job before the server starts responding with HttpResponseStatus.SERVICE_UNAVAILABLE. Better to increase this value to let some requests be queued.
Hello @mreso.
Thanks for the response.
Let me provide some more details here:
looking at your config I first thought it might be the high maxBatchDelay you selected. The maxBatchDelay can be seen as a time window during which the server waits to collect (and then batch) multiple requests. It's given in ms, which means your server will wait about 166 minutes (or until the batch is full) before it starts processing. But if you did not set a batch_size != 1 somewhere else, its default should be 1 and processing will start as soon as the first request comes in.
- Yes, batch size is 1.
Can you give a bit more context on your environment?
- Yes, as you assumed, it is a test setup, but it uses Docker Swarm because I need to scale the workflow and TorchServe doesn't have this functionality yet.
- My Docker Swarm Config:
- 3 nodes: 1 manager, 2 workers
- The manager has 4 x V100 SXM2 (32 GB each); each worker has 4 x V100 SXM2 (16 GB each)
Is this a test setup where you send a single request?
- A maximum of 2 requests is expected, as we have 2 nodes to handle this. I've also set max-workers: 2 in the workflow YAML to handle at most 2 requests at a time on a single node (I hope I did not misunderstand and set it wrong).
Is this happening during a load test with multiple requests?
- It happens after all the nodes have performed their respective tasks, when the server is supposed to let the client know that the request completed successfully.
Under normal (not extreme load cases) the netty threads should not be of any concern and can be left at their default values.
- As for the netty threads, that was just my attempt to make sure there were threads available to handle client requests, but as you said, it seems that is not necessary.
I also noticed that you selected job_queue_size=1, which means you're only queueing a single job before the server starts responding with HttpResponseStatus.SERVICE_UNAVAILABLE. Better to increase this value to let some requests be queued.
- I kept job_queue_size=1 because I thought that if I increase this number, the server will queue up that many requests. So, if the size is 3 and the client sends multiple requests that each take around 12-15 minutes to process, then the last request, which didn't even get a chance to start, will eventually hit the client timeout (30 minutes, as mentioned earlier). Please correct me if I'm wrong here; a back-of-envelope check is sketched below.
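A rough check of that reasoning (hedged: this assumes requests are processed strictly one at a time, each taking the worst-case 15 minutes; with max-workers: 2 the picture would be less pessimistic):

# Each queued request waits for everything ahead of it before it starts.
per_request_min = 15
client_timeout_min = 30
for pos in range(3):              # job_queue_size = 3 -> up to 3 jobs queued
    wait = pos * per_request_min  # time spent waiting in the queue
    total = wait + per_request_min
    verdict = "OK" if total <= client_timeout_min else "would hit the client timeout"
    print(f"request #{pos + 1}: ~{wait} min queued, ~{total} min total ({verdict})")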
Hi @KD1994, so you mean this error occurs after the 12-15 minutes of processing time? In your log the ACCESS_LOG and ERROR entries appear within pretty much the same second. Is this the first request and it fails, or are you able to issue some successful requests and then this happens?
If you could provide more logs, that would also be helpful.
Found this issue #2201 which seems to show the same error, though it seems to point to a client issue.
Found this issue https://github.com/pytorch/serve/issues/2201 which seems to show the same error, though it seems to point to a client issue.
- I've gone through this one, and then opened this issue, as my client has its timeout set to 30 mins, and I believe none of my requests take more than that.
you mean this error occurs after the 12-15 minutes of processing time?
- Yes, you are right, and it is pretty random!
Is this the first request and it fails, or are you able to issue some successful requests and then this happens?
- It successfully processes a few requests and then suddenly it fails.
- The table below shows the requests being handled and their completion status: all the failed requests actually completed, but due to this connection error the server does not inform the client of the success, so the client waits for its timeout (30 mins, as mentioned earlier) and then marks the request as failed.
Logs where I encountered this issue:
2024-08-07T12:33:52,778 [INFO ] epollEventLoopGroup-3-8 org.pytorch.serve.http.api.rest.InferenceRequestHandler - org.pytorch.serve.util.messages.RequestInput@17c57686
2024-08-07T12:33:52,781 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - pre_processing for attempt 0
2024-08-07T12:33:52,781 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723014232781
2024-08-07T12:33:52,782 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723014232782
2024-08-07T12:33:52,784 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723014232
2024-08-07T12:33:52,861 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input: /var/data/CHP_HOME/input7/input.txt
2024-08-07T12:33:52,861 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Output: /var/data/CHP_HOME/output7/06ef95668d2446be8d8ccc3762235811
2024-08-07T12:33:52,862 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input file read successfully!
2024-08-07T12:33:52,862 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.077548
2024-08-07T12:33:52,862 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:77.6|#ModelName:check_extract__pre_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723014232,84794966-e94a-41c2-81df-6ff0a37cbdf6, pattern=[METRICS]
2024-08-07T12:33:52,863 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:77.6|#ModelName:check_extract__pre_processing,Level:Model|#hostname:923a5f450f81,requestID:84794966-e94a-41c2-81df-6ff0a37cbdf6,timestamp:1723014232
2024-08-07T12:33:52,917 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:135971.025|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723014232
2024-08-07T12:33:52,918 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:396.364|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723014232
2024-08-07T12:33:52,918 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 396364, Backend time ns: 136360551
2024-08-07T12:33:52,918 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723014232
2024-08-07T12:33:52,918 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 135
2024-08-07T12:33:52,918 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723014232
2024-08-07T12:33:52,921 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m1 for attempt 0
2024-08-07T12:33:52,922 [DEBUG] W-9001-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723014232922
2024-08-07T12:33:52,922 [INFO ] W-9001-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723014232922
2024-08-07T12:33:52,933 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_LOG - Backend received inference at: 1723014232
2024-08-07T12:33:56,081 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.399206
2024-08-07T12:33:56,081 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_LOG - Infer time: 0:00:02.747461
2024-08-07T12:33:56,081 [INFO ] W-9001-check_extract__m1_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:3147.16|#ModelName:check_extract__m1,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723014236,74c8c125-c2b5-48fe-839a-be2794097ed1, pattern=[METRICS]
2024-08-07T12:33:56,081 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_METRICS - PredictionTime.ms:3147.16|#ModelName:check_extract__m1,Level:Model|#hostname:923a5f450f81,requestID:74c8c125-c2b5-48fe-839a-be2794097ed1,timestamp:1723014236
2024-08-07T12:33:56,331 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:3409475.846|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723014236
2024-08-07T12:33:56,332 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:288.213|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723014236
2024-08-07T12:33:56,332 [DEBUG] W-9001-check_extract__m1_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 288213, Backend time ns: 3409561277
2024-08-07T12:33:56,332 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723014236
2024-08-07T12:33:56,332 [INFO ] W-9001-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3409
2024-08-07T12:33:56,332 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723014236
2024-08-07T12:33:56,355 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - m2 for attempt 0
2024-08-07T12:33:56,356 [DEBUG] W-9003-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723014236356
2024-08-07T12:33:56,356 [INFO ] W-9003-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723014236356
2024-08-07T12:33:56,444 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_LOG - Backend received inference at: 1723014236
2024-08-07T12:34:01,512 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.041858
2024-08-07T12:34:01,512 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_LOG - Infer time: 0:00:04.024746
2024-08-07T12:34:01,512 [INFO ] W-9003-check_extract__m2_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:5067.25|#ModelName:check_extract__m2,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723014241,85b80856-563f-4165-9385-a67a2d801c20, pattern=[METRICS]
2024-08-07T12:34:01,512 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_METRICS - PredictionTime.ms:5067.25|#ModelName:check_extract__m2,Level:Model|#hostname:923a5f450f81,requestID:85b80856-563f-4165-9385-a67a2d801c20,timestamp:1723014241
2024-08-07T12:34:01,771 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:5415725.121|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723014241
2024-08-07T12:34:01,772 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:351.034|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723014241
2024-08-07T12:34:01,778 [DEBUG] W-9003-check_extract__m2_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 351034, Backend time ns: 5416028085
2024-08-07T12:34:01,779 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723014241
2024-08-07T12:34:01,779 [INFO ] W-9003-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 5415
2024-08-07T12:34:01,779 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:8.0|#Level:Host|#hostname:Unknown,timestamp:1723014241
2024-08-07T12:34:01,792 [INFO ] wf-execute-thread-3 org.pytorch.serve.ensemble.DagExecutor - Invoking - m3 for attempt 0
2024-08-07T12:34:01,793 [DEBUG] W-9011-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723014241793
2024-08-07T12:34:01,793 [INFO ] W-9011-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723014241793
2024-08-07T12:34:01,865 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_LOG - Backend received inference at: 1723014241
2024-08-07T12:37:12,164 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.074033
2024-08-07T12:37:12,166 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_LOG - Infer time: 0:03:09.223484
2024-08-07T12:37:12,166 [INFO ] W-9011-check_extract__m3_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:190298.12|#ModelName:check_extract__m3,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723014432,3f523215-84f1-4d02-9ce6-7d7fe348aff4, pattern=[METRICS]
2024-08-07T12:37:12,168 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_METRICS - PredictionTime.ms:190298.12|#ModelName:check_extract__m3,Level:Model|#hostname:923a5f450f81,requestID:3f523215-84f1-4d02-9ce6-7d7fe348aff4,timestamp:1723014432
2024-08-07T12:37:12,438 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:1.90644945974E8|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723014432
2024-08-07T12:37:12,438 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:305.116|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723014432
2024-08-07T12:37:12,438 [DEBUG] W-9011-check_extract__m3_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 305116, Backend time ns: 190645631382
2024-08-07T12:37:12,439 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723014432
2024-08-07T12:37:12,439 [INFO ] W-9011-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 190644
2024-08-07T12:37:12,439 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723014432
2024-08-07T12:37:12,454 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - m4 for attempt 0
2024-08-07T12:37:12,455 [DEBUG] W-9009-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723014432455
2024-08-07T12:37:12,455 [INFO ] W-9009-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723014432455
2024-08-07T12:37:12,521 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_LOG - Backend received inference at: 1723014432
2024-08-07T12:38:00,646 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.047547
2024-08-07T12:38:00,646 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_LOG - Infer time: 0:00:47.073041
2024-08-07T12:38:00,646 [INFO ] W-9009-check_extract__m4_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:48124.52|#ModelName:check_extract__m4,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723014480,abc8ac07-5538-4a9a-946b-8ca73ca14316, pattern=[METRICS]
2024-08-07T12:38:00,647 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_METRICS - PredictionTime.ms:48124.52|#ModelName:check_extract__m4,Level:Model|#hostname:923a5f450f81,requestID:abc8ac07-5538-4a9a-946b-8ca73ca14316,timestamp:1723014480
2024-08-07T12:38:00,953 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4.8498316511E7|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723014480
2024-08-07T12:38:00,953 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:524.28|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723014480
2024-08-07T12:38:00,953 [DEBUG] W-9009-check_extract__m4_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 524280, Backend time ns: 48498322507
2024-08-07T12:38:00,953 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723014480
2024-08-07T12:38:00,953 [INFO ] W-9009-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 48497
2024-08-07T12:38:00,953 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723014480
2024-08-07T12:38:00,968 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m5 for attempt 0
2024-08-07T12:38:00,968 [DEBUG] W-9006-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723014480968
2024-08-07T12:38:00,969 [INFO ] W-9006-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723014480969
2024-08-07T12:38:01,057 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_LOG - Backend received inference at: 1723014481
2024-08-07T12:46:13,226 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.999489
2024-08-07T12:46:13,227 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_LOG - Infer time: 0:08:11.166481
2024-08-07T12:46:13,227 [INFO ] W-9006-check_extract__m5_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:492169.0|#ModelName:check_extract__m5,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723014973,403a8a52-93cc-4be4-a1cb-482ec7646e4c, pattern=[METRICS]
2024-08-07T12:46:13,227 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_METRICS - PredictionTime.ms:492169.0|#ModelName:check_extract__m5,Level:Model|#hostname:923a5f450f81,requestID:403a8a52-93cc-4be4-a1cb-482ec7646e4c,timestamp:1723014973
2024-08-07T12:46:13,270 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4.92301891816E8|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723014973
2024-08-07T12:46:13,270 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:330.103|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723014973
2024-08-07T12:46:13,271 [DEBUG] W-9006-check_extract__m5_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 330103, Backend time ns: 492302148970
2024-08-07T12:46:13,271 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723014973
2024-08-07T12:46:13,271 [INFO ] W-9006-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 492301
2024-08-07T12:46:13,271 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723014973
2024-08-07T12:46:13,271 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - post_processing for attempt 0
2024-08-07T12:46:13,271 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723014973271
2024-08-07T12:46:13,271 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723014973271
2024-08-07T12:46:13,273 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723014973
2024-08-07T12:46:17,866 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Completed Request: 06ef95668d2446be8d8ccc3762235811
2024-08-07T12:46:17,866 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Postprocess time: 0:00:04.590184
2024-08-07T12:46:17,866 [INFO ] W-9005-check_extract__post_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:4591.93|#ModelName:check_extract__post_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723014977,81b64a70-18d6-490f-bb5c-62fa12954d6c, pattern=[METRICS]
2024-08-07T12:46:17,866 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4595203.865|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723014977
2024-08-07T12:46:17,866 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:4591.93|#ModelName:check_extract__post_processing,Level:Model|#hostname:923a5f450f81,requestID:81b64a70-18d6-490f-bb5c-62fa12954d6c,timestamp:1723014977
2024-08-07T12:46:17,867 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:150.608|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723014977
2024-08-07T12:46:17,867 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 150608, Backend time ns: 4595660236
2024-08-07T12:46:17,867 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723014977
2024-08-07T12:46:17,867 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 4595
2024-08-07T12:46:17,867 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723014977
2024-08-07T12:46:17,869 [INFO ] wf-manager-thread-6 ACCESS_LOG - /10.0.0.2:54426 "POST /wfpredict/check_extract HTTP/1.1" 200 745091
2024-08-07T12:46:17,870 [INFO ] wf-manager-thread-6 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:Unknown,timestamp:1723014977
2024-08-07T13:02:19,483 [INFO ] epollEventLoopGroup-3-9 org.pytorch.serve.http.api.rest.InferenceRequestHandler - org.pytorch.serve.util.messages.RequestInput@28d477d7
2024-08-07T13:02:19,485 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - pre_processing for attempt 0
2024-08-07T13:02:19,485 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723015939485
2024-08-07T13:02:19,485 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723015939485
2024-08-07T13:02:19,489 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723015939
2024-08-07T13:02:19,524 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input: /var/data/CHP_HOME/input7/input.txt
2024-08-07T13:02:19,524 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Output: /var/data/CHP_HOME/output7/2b50c86b41c54c6cb84962991e4083b6
2024-08-07T13:02:19,524 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input file read successfully!
2024-08-07T13:02:19,524 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.035028
2024-08-07T13:02:19,524 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:35.08|#ModelName:check_extract__pre_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723015939,029761b1-cefa-4e3e-8ea2-1f4f88d4275c, pattern=[METRICS]
2024-08-07T13:02:19,525 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:35.08|#ModelName:check_extract__pre_processing,Level:Model|#hostname:923a5f450f81,requestID:029761b1-cefa-4e3e-8ea2-1f4f88d4275c,timestamp:1723015939
2024-08-07T13:02:19,553 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:67658.422|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723015939
2024-08-07T13:02:19,553 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:137.244|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723015939
2024-08-07T13:02:19,553 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 137244, Backend time ns: 67983522
2024-08-07T13:02:19,553 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723015939
2024-08-07T13:02:19,553 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 67
2024-08-07T13:02:19,554 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723015939
2024-08-07T13:02:19,556 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m1 for attempt 0
2024-08-07T13:02:19,557 [DEBUG] W-9002-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723015939557
2024-08-07T13:02:19,557 [INFO ] W-9002-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723015939557
2024-08-07T13:02:19,563 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_LOG - Backend received inference at: 1723015939
2024-08-07T13:02:20,744 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.131874
2024-08-07T13:02:20,744 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_LOG - Infer time: 0:00:01.047798
2024-08-07T13:02:20,744 [INFO ] W-9002-check_extract__m1_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:1179.84|#ModelName:check_extract__m1,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723015940,26e392c4-912c-4130-be15-96564292c4e8, pattern=[METRICS]
2024-08-07T13:02:20,744 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_METRICS - PredictionTime.ms:1179.84|#ModelName:check_extract__m1,Level:Model|#hostname:923a5f450f81,requestID:26e392c4-912c-4130-be15-96564292c4e8,timestamp:1723015940
2024-08-07T13:02:20,839 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:1282143.07|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723015940
2024-08-07T13:02:20,839 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:344.247|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723015940
2024-08-07T13:02:20,839 [DEBUG] W-9002-check_extract__m1_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 344247, Backend time ns: 1282353149
2024-08-07T13:02:20,839 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723015940
2024-08-07T13:02:20,839 [INFO ] W-9002-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1281
2024-08-07T13:02:20,839 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723015940
2024-08-07T13:02:20,859 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - m2 for attempt 0
2024-08-07T13:02:20,859 [DEBUG] W-9004-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723015940859
2024-08-07T13:02:20,860 [INFO ] W-9004-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723015940860
2024-08-07T13:02:20,890 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_LOG - Backend received inference at: 1723015940
2024-08-07T13:02:23,128 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.439854
2024-08-07T13:02:23,128 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_LOG - Infer time: 0:00:01.797627
2024-08-07T13:02:23,128 [INFO ] W-9004-check_extract__m2_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:2237.85|#ModelName:check_extract__m2,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723015943,d9f3f0d1-1436-4cac-a66d-a17b1a94aa60, pattern=[METRICS]
2024-08-07T13:02:23,128 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_METRICS - PredictionTime.ms:2237.85|#ModelName:check_extract__m2,Level:Model|#hostname:923a5f450f81,requestID:d9f3f0d1-1436-4cac-a66d-a17b1a94aa60,timestamp:1723015943
2024-08-07T13:02:23,254 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:2394722.923|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723015943
2024-08-07T13:02:23,254 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:244.588|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723015943
2024-08-07T13:02:23,254 [DEBUG] W-9004-check_extract__m2_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 244588, Backend time ns: 2394986476
2024-08-07T13:02:23,254 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723015943
2024-08-07T13:02:23,255 [INFO ] W-9004-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2394
2024-08-07T13:02:23,255 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723015943
2024-08-07T13:02:23,273 [INFO ] wf-execute-thread-3 org.pytorch.serve.ensemble.DagExecutor - Invoking - m3 for attempt 0
2024-08-07T13:02:23,274 [DEBUG] W-9010-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723015943274
2024-08-07T13:02:23,274 [INFO ] W-9010-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723015943274
2024-08-07T13:02:23,326 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_LOG - Backend received inference at: 1723015943
2024-08-07T13:04:32,141 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.434066
2024-08-07T13:04:32,141 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_LOG - Infer time: 0:02:08.380605
2024-08-07T13:04:32,141 [INFO ] W-9010-check_extract__m3_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:128814.92|#ModelName:check_extract__m3,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723016072,82a76b45-f9bf-432d-8603-f3f490ec9043, pattern=[METRICS]
2024-08-07T13:04:32,141 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_METRICS - PredictionTime.ms:128814.92|#ModelName:check_extract__m3,Level:Model|#hostname:923a5f450f81,requestID:82a76b45-f9bf-432d-8603-f3f490ec9043,timestamp:1723016072
2024-08-07T13:04:32,260 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:1.28986021583E8|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723016072
2024-08-07T13:04:32,260 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:200.674|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723016072
2024-08-07T13:04:32,260 [DEBUG] W-9010-check_extract__m3_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 200674, Backend time ns: 128986345770
2024-08-07T13:04:32,260 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723016072
2024-08-07T13:04:32,260 [INFO ] W-9010-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 128986
2024-08-07T13:04:32,261 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723016072
2024-08-07T13:04:32,269 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - m4 for attempt 0
2024-08-07T13:04:32,269 [DEBUG] W-9008-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723016072269
2024-08-07T13:04:32,270 [INFO ] W-9008-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723016072270
2024-08-07T13:04:32,320 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_LOG - Backend received inference at: 1723016072
2024-08-07T13:04:49,996 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.490296
2024-08-07T13:04:49,996 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_LOG - Infer time: 0:00:17.185510
2024-08-07T13:04:49,996 [INFO ] W-9008-check_extract__m4_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:17676.04|#ModelName:check_extract__m4,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723016089,5443223c-1a34-43c2-8a0b-834f918d47c6, pattern=[METRICS]
2024-08-07T13:04:49,997 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_METRICS - PredictionTime.ms:17676.04|#ModelName:check_extract__m4,Level:Model|#hostname:923a5f450f81,requestID:5443223c-1a34-43c2-8a0b-834f918d47c6,timestamp:1723016089
2024-08-07T13:04:50,125 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:1.7855918614E7|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723016090
2024-08-07T13:04:50,125 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:258.859|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723016090
2024-08-07T13:04:50,125 [DEBUG] W-9008-check_extract__m4_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 258859, Backend time ns: 17856104650
2024-08-07T13:04:50,125 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723016090
2024-08-07T13:04:50,125 [INFO ] W-9008-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 17855
2024-08-07T13:04:50,126 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723016090
2024-08-07T13:04:50,131 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m5 for attempt 0
2024-08-07T13:04:50,132 [DEBUG] W-9007-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723016090132
2024-08-07T13:04:50,132 [INFO ] W-9007-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723016090132
2024-08-07T13:04:50,162 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_LOG - Backend received inference at: 1723016090
2024-08-07T13:08:26,117 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.462312
2024-08-07T13:08:26,117 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_LOG - Infer time: 0:03:35.485905
2024-08-07T13:08:26,118 [INFO ] W-9007-check_extract__m5_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:215948.49|#ModelName:check_extract__m5,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723016306,3c9dbbe4-a2b3-4de9-9f42-b9c3e8f6a19e, pattern=[METRICS]
2024-08-07T13:08:26,119 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_METRICS - PredictionTime.ms:215948.49|#ModelName:check_extract__m5,Level:Model|#hostname:923a5f450f81,requestID:3c9dbbe4-a2b3-4de9-9f42-b9c3e8f6a19e,timestamp:1723016306
2024-08-07T13:08:26,138 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:2.16005976989E8|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723016306
2024-08-07T13:08:26,138 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:154.052|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723016306
2024-08-07T13:08:26,138 [DEBUG] W-9007-check_extract__m5_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 154052, Backend time ns: 216006492875
2024-08-07T13:08:26,138 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723016306
2024-08-07T13:08:26,138 [INFO ] W-9007-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 216005
2024-08-07T13:08:26,138 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723016306
2024-08-07T13:08:26,140 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - post_processing for attempt 0
2024-08-07T13:08:26,140 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723016306140
2024-08-07T13:08:26,141 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723016306141
2024-08-07T13:08:26,143 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723016306
2024-08-07T13:08:28,110 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Completed Request: 2b50c86b41c54c6cb84962991e4083b6
2024-08-07T13:08:28,111 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Postprocess time: 0:00:01.965198
2024-08-07T13:08:28,111 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:1970809.366|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723016308
2024-08-07T13:08:28,111 [INFO ] W-9005-check_extract__post_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:1966.24|#ModelName:check_extract__post_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723016308,cdded6b9-4c7a-4d7e-adbd-9fcf5f347520, pattern=[METRICS]
2024-08-07T13:08:28,111 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:368.848|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723016308
2024-08-07T13:08:28,111 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 368848, Backend time ns: 1970996353
2024-08-07T13:08:28,111 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:1966.24|#ModelName:check_extract__post_processing,Level:Model|#hostname:923a5f450f81,requestID:cdded6b9-4c7a-4d7e-adbd-9fcf5f347520,timestamp:1723016308
2024-08-07T13:08:28,111 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723016308
2024-08-07T13:08:28,112 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1970
2024-08-07T13:08:28,112 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723016308
2024-08-07T13:08:28,114 [INFO ] wf-manager-thread-7 ACCESS_LOG - /10.0.0.2:50700 "POST /wfpredict/check_extract HTTP/1.1" 200 368632
2024-08-07T13:08:28,115 [INFO ] wf-manager-thread-7 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:Unknown,timestamp:1723016308
2024-08-07T13:24:03,509 [INFO ] epollEventLoopGroup-3-10 org.pytorch.serve.http.api.rest.InferenceRequestHandler - org.pytorch.serve.util.messages.RequestInput@74c01225
2024-08-07T13:24:03,511 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - pre_processing for attempt 0
2024-08-07T13:24:03,512 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723017243512
2024-08-07T13:24:03,512 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723017243512
2024-08-07T13:24:03,514 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723017243
2024-08-07T13:24:03,605 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input: /var/data/CHP_HOME/input7/input.txt
2024-08-07T13:24:03,605 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Output: /var/data/CHP_HOME/output7/3a9b19e383924f06a7c5659f76e3e22c
2024-08-07T13:24:03,605 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input file read successfully!
2024-08-07T13:24:03,605 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.089920
2024-08-07T13:24:03,605 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:89.99|#ModelName:check_extract__pre_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723017243,078156dd-b085-4245-9a23-1b3cab75e645, pattern=[METRICS]
2024-08-07T13:24:03,606 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:89.99|#ModelName:check_extract__pre_processing,Level:Model|#hostname:923a5f450f81,requestID:078156dd-b085-4245-9a23-1b3cab75e645,timestamp:1723017243
2024-08-07T13:24:03,666 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:153671.161|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723017243
2024-08-07T13:24:03,666 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:291.509|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723017243
2024-08-07T13:24:03,666 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 291509, Backend time ns: 154366096
2024-08-07T13:24:03,667 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723017243
2024-08-07T13:24:03,667 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 153
2024-08-07T13:24:03,667 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723017243
2024-08-07T13:24:03,670 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m1 for attempt 0
2024-08-07T13:24:03,670 [DEBUG] W-9001-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723017243670
2024-08-07T13:24:03,671 [INFO ] W-9001-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723017243671
2024-08-07T13:24:03,708 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_LOG - Backend received inference at: 1723017243
2024-08-07T13:24:06,874 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.366837
2024-08-07T13:24:06,874 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_LOG - Infer time: 0:00:02.822135
2024-08-07T13:24:06,874 [INFO ] W-9001-check_extract__m1_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:3189.47|#ModelName:check_extract__m1,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723017246,c94ad080-a6bc-4218-84b6-ce9b796ce8ea, pattern=[METRICS]
2024-08-07T13:24:06,874 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_METRICS - PredictionTime.ms:3189.47|#ModelName:check_extract__m1,Level:Model|#hostname:923a5f450f81,requestID:c94ad080-a6bc-4218-84b6-ce9b796ce8ea,timestamp:1723017246
2024-08-07T13:24:07,170 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:3499166.938|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723017247
2024-08-07T13:24:07,170 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:276.41|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723017247
2024-08-07T13:24:07,170 [DEBUG] W-9001-check_extract__m1_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 276410, Backend time ns: 3499361094
2024-08-07T13:24:07,170 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723017247
2024-08-07T13:24:07,170 [INFO ] W-9001-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3498
2024-08-07T13:24:07,170 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723017247
2024-08-07T13:24:07,185 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - m2 for attempt 0
2024-08-07T13:24:07,186 [DEBUG] W-9003-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723017247186
2024-08-07T13:24:07,186 [INFO ] W-9003-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723017247186
2024-08-07T13:24:07,266 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_LOG - Backend received inference at: 1723017247
2024-08-07T13:24:12,452 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.052664
2024-08-07T13:24:12,453 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_LOG - Infer time: 0:00:04.132361
2024-08-07T13:24:12,453 [INFO ] W-9003-check_extract__m2_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:5185.57|#ModelName:check_extract__m2,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723017252,21bf551e-aeee-4180-9861-d3f5e230980f, pattern=[METRICS]
2024-08-07T13:24:12,453 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_METRICS - PredictionTime.ms:5185.57|#ModelName:check_extract__m2,Level:Model|#hostname:923a5f450f81,requestID:21bf551e-aeee-4180-9861-d3f5e230980f,timestamp:1723017252
2024-08-07T13:24:12,768 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:5582006.759|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723017252
2024-08-07T13:24:12,768 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:376.757|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723017252
2024-08-07T13:24:12,768 [DEBUG] W-9003-check_extract__m2_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 376757, Backend time ns: 5582125613
2024-08-07T13:24:12,768 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723017252
2024-08-07T13:24:12,768 [INFO ] W-9003-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 5582
2024-08-07T13:24:12,768 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723017252
2024-08-07T13:24:12,813 [INFO ] wf-execute-thread-3 org.pytorch.serve.ensemble.DagExecutor - Invoking - m3 for attempt 0
2024-08-07T13:24:12,814 [DEBUG] W-9011-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723017252814
2024-08-07T13:24:12,814 [INFO ] W-9011-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723017252814
2024-08-07T13:24:12,951 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_LOG - Backend received inference at: 1723017252
2024-08-07T13:27:47,077 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.143014
2024-08-07T13:27:47,077 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_LOG - Infer time: 0:03:32.981579
2024-08-07T13:27:47,077 [INFO ] W-9011-check_extract__m3_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:214125.22|#ModelName:check_extract__m3,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723017467,bbf7ce63-95f0-4fbd-af71-97f04e60fbd7, pattern=[METRICS]
2024-08-07T13:27:47,078 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_METRICS - PredictionTime.ms:214125.22|#ModelName:check_extract__m3,Level:Model|#hostname:923a5f450f81,requestID:bbf7ce63-95f0-4fbd-af71-97f04e60fbd7,timestamp:1723017467
2024-08-07T13:27:47,425 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:2.1461147091E8|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723017467
2024-08-07T13:27:47,426 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:340.59|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723017467
2024-08-07T13:27:47,426 [DEBUG] W-9011-check_extract__m3_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 340590, Backend time ns: 214611610257
2024-08-07T13:27:47,426 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723017467
2024-08-07T13:27:47,426 [INFO ] W-9011-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 214611
2024-08-07T13:27:47,426 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723017467
2024-08-07T13:27:47,454 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - m4 for attempt 0
2024-08-07T13:27:47,454 [DEBUG] W-9009-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723017467454
2024-08-07T13:27:47,455 [INFO ] W-9009-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723017467455
2024-08-07T13:27:47,565 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_LOG - Backend received inference at: 1723017467
2024-08-07T13:28:31,427 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.227225
2024-08-07T13:28:31,427 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_LOG - Infer time: 0:00:42.633890
2024-08-07T13:28:31,428 [INFO ] W-9009-check_extract__m4_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:43861.64|#ModelName:check_extract__m4,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723017511,5a935bc1-f6bc-4476-9e25-d7842d225dac, pattern=[METRICS]
2024-08-07T13:28:31,428 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_METRICS - PredictionTime.ms:43861.64|#ModelName:check_extract__m4,Level:Model|#hostname:923a5f450f81,requestID:5a935bc1-f6bc-4476-9e25-d7842d225dac,timestamp:1723017511
2024-08-07T13:28:31,785 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4.4331321527E7|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723017511
2024-08-07T13:28:31,786 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:414.348|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723017511
2024-08-07T13:28:31,786 [DEBUG] W-9009-check_extract__m4_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 414348, Backend time ns: 44331404139
2024-08-07T13:28:31,786 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723017511
2024-08-07T13:28:31,786 [INFO ] W-9009-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 44330
2024-08-07T13:28:31,797 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:13.0|#Level:Host|#hostname:Unknown,timestamp:1723017511
2024-08-07T13:28:31,811 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m5 for attempt 0
2024-08-07T13:28:31,812 [DEBUG] W-9006-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723017511812
2024-08-07T13:28:31,812 [INFO ] W-9006-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723017511812
2024-08-07T13:28:31,897 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_LOG - Backend received inference at: 1723017511
2024-08-07T13:35:57,349 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.069576
2024-08-07T13:35:57,350 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_LOG - Infer time: 0:07:24.379070
2024-08-07T13:35:57,350 [INFO ] W-9006-check_extract__m5_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:445449.3|#ModelName:check_extract__m5,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723017957,a0e0846f-4a12-4ba5-a6b5-5a490e71216a, pattern=[METRICS]
2024-08-07T13:35:57,351 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_METRICS - PredictionTime.ms:445449.3|#ModelName:check_extract__m5,Level:Model|#hostname:923a5f450f81,requestID:a0e0846f-4a12-4ba5-a6b5-5a490e71216a,timestamp:1723017957
2024-08-07T13:35:57,394 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4.45581920795E8|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723017957
2024-08-07T13:35:57,394 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:236.86|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723017957
2024-08-07T13:35:57,394 [DEBUG] W-9006-check_extract__m5_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 236860, Backend time ns: 445582165213
2024-08-07T13:35:57,394 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723017957
2024-08-07T13:35:57,394 [INFO ] W-9006-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 445581
2024-08-07T13:35:57,394 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723017957
2024-08-07T13:35:57,413 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - post_processing for attempt 0
2024-08-07T13:35:57,413 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723017957413
2024-08-07T13:35:57,413 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723017957413
2024-08-07T13:35:57,416 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723017957
2024-08-07T13:36:03,924 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Completed Request: 3a9b19e383924f06a7c5659f76e3e22c
2024-08-07T13:36:03,924 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Postprocess time: 0:00:06.504946
2024-08-07T13:36:03,924 [INFO ] W-9005-check_extract__post_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:6506.71|#ModelName:check_extract__post_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723017963,5fd4fd3b-0ed8-4489-8721-eeabd8615d71, pattern=[METRICS]
2024-08-07T13:36:03,924 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:6506.71|#ModelName:check_extract__post_processing,Level:Model|#hostname:923a5f450f81,requestID:5fd4fd3b-0ed8-4489-8721-eeabd8615d71,timestamp:1723017963
2024-08-07T13:36:03,925 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:6511731.516|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723017963
2024-08-07T13:36:03,925 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:257.887|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723017963
2024-08-07T13:36:03,925 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 257887, Backend time ns: 6511901513
2024-08-07T13:36:03,925 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723017963
2024-08-07T13:36:03,925 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 6511
2024-08-07T13:36:03,925 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723017963
2024-08-07T13:36:03,927 [INFO ] wf-manager-thread-8 ACCESS_LOG - /10.0.0.2:46524 "POST /wfpredict/check_extract HTTP/1.1" 200 720419
2024-08-07T13:36:03,929 [INFO ] wf-manager-thread-8 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:Unknown,timestamp:1723017963
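Up to here everything is normal: the run above returns its 200 at 13:36:03 (ACCESS_LOG for /10.0.0.2:46524) and nothing is logged afterwards until the next request arrives at 13:50:52.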
2024-08-07T13:50:52,676 [INFO ] epollEventLoopGroup-3-11 org.pytorch.serve.http.api.rest.InferenceRequestHandler - org.pytorch.serve.util.messages.RequestInput@10254dfd
2024-08-07T13:50:52,679 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - pre_processing for attempt 0
2024-08-07T13:50:52,679 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723018852679
2024-08-07T13:50:52,679 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723018852679
2024-08-07T13:50:52,681 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723018852
2024-08-07T13:50:52,759 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input: /var/data/CHP_HOME/input7/input.txt
2024-08-07T13:50:52,759 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Output: /var/data/CHP_HOME/output7/c7344a73d9174685bf830a02bc2f7bf4
2024-08-07T13:50:52,760 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input file read successfully!
2024-08-07T13:50:52,760 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.077601
2024-08-07T13:50:52,760 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:77.67|#ModelName:check_extract__pre_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723018852,d974ea75-0046-4918-a0e0-b416cc1e9b72, pattern=[METRICS]
2024-08-07T13:50:52,760 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:77.67|#ModelName:check_extract__pre_processing,Level:Model|#hostname:923a5f450f81,requestID:d974ea75-0046-4918-a0e0-b416cc1e9b72,timestamp:1723018852
2024-08-07T13:50:52,818 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:139169.27|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723018852
2024-08-07T13:50:52,818 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:245.826|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723018852
2024-08-07T13:50:52,819 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 245826, Backend time ns: 139323847
2024-08-07T13:50:52,819 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723018852
2024-08-07T13:50:52,819 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 139
2024-08-07T13:50:52,819 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723018852
2024-08-07T13:50:52,822 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m1 for attempt 0
2024-08-07T13:50:52,823 [DEBUG] W-9002-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723018852823
2024-08-07T13:50:52,823 [INFO ] W-9002-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723018852823
2024-08-07T13:50:52,832 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_LOG - Backend received inference at: 1723018852
2024-08-07T13:50:55,737 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.301371
2024-08-07T13:50:55,737 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_LOG - Infer time: 0:00:02.603456
2024-08-07T13:50:55,737 [INFO ] W-9002-check_extract__m1_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:2905.18|#ModelName:check_extract__m1,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723018855,0d5466e3-b2bc-4905-869e-e42340ef9c74, pattern=[METRICS]
2024-08-07T13:50:55,738 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_METRICS - PredictionTime.ms:2905.18|#ModelName:check_extract__m1,Level:Model|#hostname:923a5f450f81,requestID:0d5466e3-b2bc-4905-869e-e42340ef9c74,timestamp:1723018855
2024-08-07T13:50:56,042 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:3219145.61|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723018856
2024-08-07T13:50:56,042 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:353.86|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723018856
2024-08-07T13:50:56,042 [DEBUG] W-9002-check_extract__m1_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 353860, Backend time ns: 3219266284
2024-08-07T13:50:56,042 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723018856
2024-08-07T13:50:56,043 [INFO ] W-9002-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3219
2024-08-07T13:50:56,043 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723018856
2024-08-07T13:50:56,060 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - m2 for attempt 0
2024-08-07T13:50:56,061 [DEBUG] W-9004-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723018856061
2024-08-07T13:50:56,061 [INFO ] W-9004-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723018856061
2024-08-07T13:50:56,148 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_LOG - Backend received inference at: 1723018856
2024-08-07T13:51:01,678 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.112767
2024-08-07T13:51:01,678 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_LOG - Infer time: 0:00:04.416766
2024-08-07T13:51:01,678 [INFO ] W-9004-check_extract__m2_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:5529.87|#ModelName:check_extract__m2,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723018861,423561b3-edf4-4b6e-873a-9e03c45b4506, pattern=[METRICS]
2024-08-07T13:51:01,678 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_METRICS - PredictionTime.ms:5529.87|#ModelName:check_extract__m2,Level:Model|#hostname:923a5f450f81,requestID:423561b3-edf4-4b6e-873a-9e03c45b4506,timestamp:1723018861
2024-08-07T13:51:01,941 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:5879979.699|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723018861
2024-08-07T13:51:01,941 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:254.772|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723018861
2024-08-07T13:51:01,941 [DEBUG] W-9004-check_extract__m2_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 254772, Backend time ns: 5880018197
2024-08-07T13:51:01,941 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723018861
2024-08-07T13:51:01,941 [INFO ] W-9004-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 5880
2024-08-07T13:51:01,941 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723018861
2024-08-07T13:51:01,958 [INFO ] wf-execute-thread-3 org.pytorch.serve.ensemble.DagExecutor - Invoking - m3 for attempt 0
2024-08-07T13:51:01,958 [DEBUG] W-9010-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723018861958
2024-08-07T13:51:01,959 [INFO ] W-9010-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723018861959
2024-08-07T13:51:02,041 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_LOG - Backend received inference at: 1723018862
2024-08-07T13:55:26,502 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.243016
2024-08-07T13:55:26,502 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_LOG - Infer time: 0:04:23.216866
2024-08-07T13:55:26,502 [INFO ] W-9010-check_extract__m3_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:264460.28|#ModelName:check_extract__m3,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723019126,cacbdb1f-caf8-4ce3-a86e-b309a8b6585e, pattern=[METRICS]
2024-08-07T13:55:26,502 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_METRICS - PredictionTime.ms:264460.28|#ModelName:check_extract__m3,Level:Model|#hostname:923a5f450f81,requestID:cacbdb1f-caf8-4ce3-a86e-b309a8b6585e,timestamp:1723019126
2024-08-07T13:55:26,790 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:2.64830008315E8|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723019126
2024-08-07T13:55:26,791 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:158.048|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723019126
2024-08-07T13:55:26,791 [DEBUG] W-9010-check_extract__m3_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 158048, Backend time ns: 264832581565
2024-08-07T13:55:26,791 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723019126
2024-08-07T13:55:26,792 [INFO ] W-9010-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 264829
2024-08-07T13:55:26,792 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:5.0|#Level:Host|#hostname:Unknown,timestamp:1723019126
2024-08-07T13:55:26,821 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - m4 for attempt 0
2024-08-07T13:55:26,822 [DEBUG] W-9008-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723019126821
2024-08-07T13:55:26,822 [INFO ] W-9008-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723019126822
2024-08-07T13:55:26,971 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_LOG - Backend received inference at: 1723019126
2024-08-07T13:56:08,415 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.325064
2024-08-07T13:56:08,415 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_LOG - Infer time: 0:00:40.118610
2024-08-07T13:56:08,416 [INFO ] W-9008-check_extract__m4_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:41444.23|#ModelName:check_extract__m4,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723019168,71f86f25-8355-4532-90c8-3f68310dddb2, pattern=[METRICS]
2024-08-07T13:56:08,416 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_METRICS - PredictionTime.ms:41444.23|#ModelName:check_extract__m4,Level:Model|#hostname:923a5f450f81,requestID:71f86f25-8355-4532-90c8-3f68310dddb2,timestamp:1723019168
2024-08-07T13:56:08,754 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4.1933274911E7|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723019168
2024-08-07T13:56:08,754 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:583.189|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723019168
2024-08-07T13:56:08,755 [DEBUG] W-9008-check_extract__m4_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 583189, Backend time ns: 41933148396
2024-08-07T13:56:08,755 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723019168
2024-08-07T13:56:08,755 [INFO ] W-9008-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 41932
2024-08-07T13:56:08,755 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723019168
2024-08-07T13:56:08,769 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m5 for attempt 0
2024-08-07T13:56:08,770 [DEBUG] W-9007-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723019168770
2024-08-07T13:56:08,770 [INFO ] W-9007-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723019168770
2024-08-07T13:56:08,876 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_LOG - Backend received inference at: 1723019168
2024-08-07T14:08:09,184 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.082973
2024-08-07T14:08:09,184 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_LOG - Infer time: 0:11:59.222651
2024-08-07T14:08:09,184 [INFO ] W-9007-check_extract__m5_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:720306.22|#ModelName:check_extract__m5,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723019889,9b8307b9-ea3b-455b-90e7-383cfd3e9f2a, pattern=[METRICS]
2024-08-07T14:08:09,185 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_METRICS - PredictionTime.ms:720306.22|#ModelName:check_extract__m5,Level:Model|#hostname:923a5f450f81,requestID:9b8307b9-ea3b-455b-90e7-383cfd3e9f2a,timestamp:1723019889
2024-08-07T14:08:09,233 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:7.2046338696E8|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723019889
2024-08-07T14:08:09,233 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:344.27|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723019889
2024-08-07T14:08:09,233 [DEBUG] W-9007-check_extract__m5_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 344270, Backend time ns: 720463569415
2024-08-07T14:08:09,233 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723019889
2024-08-07T14:08:09,233 [INFO ] W-9007-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 720463
2024-08-07T14:08:09,233 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723019889
2024-08-07T14:08:09,235 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - post_processing for attempt 0
2024-08-07T14:08:09,236 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723019889236
2024-08-07T14:08:09,236 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723019889236
2024-08-07T14:08:09,239 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723019889
2024-08-07T14:08:15,716 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Completed Request: c7344a73d9174685bf830a02bc2f7bf4
2024-08-07T14:08:15,716 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Postprocess time: 0:00:06.474580
2024-08-07T14:08:15,716 [INFO ] W-9005-check_extract__post_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:6476.17|#ModelName:check_extract__post_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723019895,7f0e59ea-ddd1-46b5-a028-23ee1e7dc50a, pattern=[METRICS]
2024-08-07T14:08:15,716 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:6480587.351|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723019895
2024-08-07T14:08:15,716 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:6476.17|#ModelName:check_extract__post_processing,Level:Model|#hostname:923a5f450f81,requestID:7f0e59ea-ddd1-46b5-a028-23ee1e7dc50a,timestamp:1723019895
2024-08-07T14:08:15,716 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:354.826|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723019895
2024-08-07T14:08:15,717 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 354826, Backend time ns: 6480731051
2024-08-07T14:08:15,717 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723019895
2024-08-07T14:08:15,717 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 6480
2024-08-07T14:08:15,717 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723019895
2024-08-07T14:08:15,719 [INFO ] wf-manager-thread-9 ACCESS_LOG - /10.0.0.2:59812 "POST /wfpredict/check_extract HTTP/1.1" 200 1043044
2024-08-07T14:08:15,720 [INFO ] wf-manager-thread-9 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:Unknown,timestamp:1723019895
2024-08-07T14:08:15,723 [ERROR] epollEventLoopGroup-3-11 org.pytorch.serve.http.HttpRequestHandler -
io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
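Note the timing here: the reset is logged just a few milliseconds after the 200 access log (14:08:15,719 vs 14:08:15,723), and on epollEventLoopGroup-3-11 — the same event-loop thread that accepted this request at 13:50:52. So the workflow has already completed and the response has already been written when the channel fails; the failing read is on the kept-alive connection afterwards.
For illustration only, here is a minimal hypothetical client sketch (Python; not our actual caller — host, port, and the empty body are placeholders, only the endpoint path is from the logs) that produces exactly this server-side signature: it reads the response over an HTTP/1.1 keep-alive connection and then closes the socket abortively, so the kernel sends a TCP RST instead of a FIN and the server's next read fails with ECONNRESET:

import socket
import struct

HOST, PORT = "127.0.0.1", 8080  # placeholder address, not taken from the logs

sock = socket.create_connection((HOST, PORT))
request = (
    "POST /wfpredict/check_extract HTTP/1.1\r\n"
    f"Host: {HOST}:{PORT}\r\n"
    "Connection: keep-alive\r\n"
    "Content-Length: 0\r\n"  # placeholder body, for illustration only
    "\r\n"
)
sock.sendall(request.encode())

response = sock.recv(65536)  # simplified: assume the whole response fits in one read

# Abortive close: SO_LINGER with l_onoff=1, l_linger=0 makes close() send a
# TCP RST rather than a FIN; on the server side Netty's next read on this
# channel then fails with ECONNRESET ("Connection reset by peer").
sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
sock.close()

This only demonstrates that the logged exception can come from connection teardown after a fully completed exchange; whether the real caller, or anything routing between it and the container, actually closes the socket this way would need a packet capture to confirm.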
2024-08-07T14:36:11,302 [INFO ] epollEventLoopGroup-3-12 org.pytorch.serve.http.api.rest.InferenceRequestHandler - org.pytorch.serve.util.messages.RequestInput@4b3cc7c1
2024-08-07T14:36:11,305 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - pre_processing for attempt 0
2024-08-07T14:36:11,305 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723021571305
2024-08-07T14:36:11,306 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723021571305
2024-08-07T14:36:11,307 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723021571
2024-08-07T14:36:11,395 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input: /var/data/CHP_HOME/input7/input.txt
2024-08-07T14:36:11,395 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Output: /var/data/CHP_HOME/output7/1943530fa750430dacf606a0d1fdaf3a
2024-08-07T14:36:11,395 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input file read successfully!
2024-08-07T14:36:11,396 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.087861
2024-08-07T14:36:11,396 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:87.92|#ModelName:check_extract__pre_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723021571,826e4f9d-b976-4b49-9c22-eecc055cc5e4, pattern=[METRICS]
2024-08-07T14:36:11,396 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:87.92|#ModelName:check_extract__pre_processing,Level:Model|#hostname:923a5f450f81,requestID:826e4f9d-b976-4b49-9c22-eecc055cc5e4,timestamp:1723021571
2024-08-07T14:36:11,453 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:147765.834|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723021571
2024-08-07T14:36:11,453 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:279.403|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723021571
2024-08-07T14:36:11,453 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 279403, Backend time ns: 147977691
2024-08-07T14:36:11,453 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723021571
2024-08-07T14:36:11,453 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 147
2024-08-07T14:36:11,454 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723021571
2024-08-07T14:36:11,457 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m1 for attempt 0
2024-08-07T14:36:11,458 [DEBUG] W-9001-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723021571458
2024-08-07T14:36:11,458 [INFO ] W-9001-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723021571458
2024-08-07T14:36:11,464 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_LOG - Backend received inference at: 1723021571
2024-08-07T14:36:14,434 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.311975
2024-08-07T14:36:14,434 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_LOG - Infer time: 0:00:02.657535
2024-08-07T14:36:14,434 [INFO ] W-9001-check_extract__m1_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:2969.73|#ModelName:check_extract__m1,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723021574,224fa242-ef53-4b3f-b91d-a21ec226426d, pattern=[METRICS]
2024-08-07T14:36:14,434 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_METRICS - PredictionTime.ms:2969.73|#ModelName:check_extract__m1,Level:Model|#hostname:923a5f450f81,requestID:224fa242-ef53-4b3f-b91d-a21ec226426d,timestamp:1723021574
2024-08-07T14:36:14,698 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:3240667.066|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723021574
2024-08-07T14:36:14,698 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:288.653|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723021574
2024-08-07T14:36:14,699 [DEBUG] W-9001-check_extract__m1_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 288653, Backend time ns: 3240849880
2024-08-07T14:36:14,699 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723021574
2024-08-07T14:36:14,699 [INFO ] W-9001-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3240
2024-08-07T14:36:14,699 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723021574
2024-08-07T14:36:14,714 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - m2 for attempt 0
2024-08-07T14:36:14,715 [DEBUG] W-9003-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723021574715
2024-08-07T14:36:14,715 [INFO ] W-9003-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723021574715
2024-08-07T14:36:14,796 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_LOG - Backend received inference at: 1723021574
2024-08-07T14:36:20,075 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.034803
2024-08-07T14:36:20,075 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_LOG - Infer time: 0:00:04.242982
2024-08-07T14:36:20,075 [INFO ] W-9003-check_extract__m2_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:5278.09|#ModelName:check_extract__m2,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723021580,f7e28d25-912f-41bc-a0f2-5b2ca5b013b2, pattern=[METRICS]
2024-08-07T14:36:20,075 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_METRICS - PredictionTime.ms:5278.09|#ModelName:check_extract__m2,Level:Model|#hostname:923a5f450f81,requestID:f7e28d25-912f-41bc-a0f2-5b2ca5b013b2,timestamp:1723021580
2024-08-07T14:36:20,365 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:5649946.513|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723021580
2024-08-07T14:36:20,365 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:280.795|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723021580
2024-08-07T14:36:20,365 [DEBUG] W-9003-check_extract__m2_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 280795, Backend time ns: 5650123031
2024-08-07T14:36:20,365 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723021580
2024-08-07T14:36:20,366 [INFO ] W-9003-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 5649
2024-08-07T14:36:20,366 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723021580
2024-08-07T14:36:20,387 [INFO ] wf-execute-thread-3 org.pytorch.serve.ensemble.DagExecutor - Invoking - m3 for attempt 0
2024-08-07T14:36:20,387 [DEBUG] W-9011-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723021580387
2024-08-07T14:36:20,388 [INFO ] W-9011-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723021580388
2024-08-07T14:36:20,497 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_LOG - Backend received inference at: 1723021580
2024-08-07T14:39:45,546 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.143047
2024-08-07T14:39:45,546 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_LOG - Infer time: 0:03:23.904989
2024-08-07T14:39:45,546 [INFO ] W-9011-check_extract__m3_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:205048.48|#ModelName:check_extract__m3,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723021785,5843485a-5ff9-40ec-a14e-ffc7060f5beb, pattern=[METRICS]
2024-08-07T14:39:45,547 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_METRICS - PredictionTime.ms:205048.48|#ModelName:check_extract__m3,Level:Model|#hostname:923a5f450f81,requestID:5843485a-5ff9-40ec-a14e-ffc7060f5beb,timestamp:1723021785
2024-08-07T14:39:45,887 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:2.05500266128E8|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723021785
2024-08-07T14:39:45,888 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:299.753|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723021785
2024-08-07T14:39:45,888 [DEBUG] W-9011-check_extract__m3_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 299753, Backend time ns: 205500430690
2024-08-07T14:39:45,888 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723021785
2024-08-07T14:39:45,888 [INFO ] W-9011-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 205499
2024-08-07T14:39:45,888 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723021785
2024-08-07T14:39:45,914 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - m4 for attempt 0
2024-08-07T14:39:45,915 [DEBUG] W-9009-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723021785914
2024-08-07T14:39:45,915 [INFO ] W-9009-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723021785915
2024-08-07T14:39:46,002 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_LOG - Backend received inference at: 1723021786
2024-08-07T14:40:30,323 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.095320
2024-08-07T14:40:30,324 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_LOG - Infer time: 0:00:43.222828
2024-08-07T14:40:30,324 [INFO ] W-9009-check_extract__m4_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:44318.52|#ModelName:check_extract__m4,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723021830,167fadda-9ad7-49e1-b680-6fd2308c12d2, pattern=[METRICS]
2024-08-07T14:40:30,324 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_METRICS - PredictionTime.ms:44318.52|#ModelName:check_extract__m4,Level:Model|#hostname:923a5f450f81,requestID:167fadda-9ad7-49e1-b680-6fd2308c12d2,timestamp:1723021830
2024-08-07T14:40:30,620 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4.4705299287E7|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723021830
2024-08-07T14:40:30,620 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:244.785|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723021830
2024-08-07T14:40:30,620 [DEBUG] W-9009-check_extract__m4_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 244785, Backend time ns: 44705340148
2024-08-07T14:40:30,620 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723021830
2024-08-07T14:40:30,620 [INFO ] W-9009-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 44704
2024-08-07T14:40:30,620 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723021830
2024-08-07T14:40:30,633 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m5 for attempt 0
2024-08-07T14:40:30,634 [DEBUG] W-9006-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723021830634
2024-08-07T14:40:30,634 [INFO ] W-9006-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723021830634
2024-08-07T14:40:30,707 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_LOG - Backend received inference at: 1723021830
2024-08-07T14:49:53,703 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.044202
2024-08-07T14:49:53,703 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_LOG - Infer time: 0:09:21.947602
2024-08-07T14:49:53,703 [INFO ] W-9006-check_extract__m5_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:562992.48|#ModelName:check_extract__m5,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723022393,2ecd42e3-639f-445c-adc4-57b61d3bb7e9, pattern=[METRICS]
2024-08-07T14:49:53,704 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_METRICS - PredictionTime.ms:562992.48|#ModelName:check_extract__m5,Level:Model|#hostname:923a5f450f81,requestID:2ecd42e3-639f-445c-adc4-57b61d3bb7e9,timestamp:1723022393
2024-08-07T14:49:53,749 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:5.63115307385E8|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723022393
2024-08-07T14:49:53,749 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:172.826|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723022393
2024-08-07T14:49:53,749 [DEBUG] W-9006-check_extract__m5_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 172826, Backend time ns: 563115774549
2024-08-07T14:49:53,750 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723022393
2024-08-07T14:49:53,750 [INFO ] W-9006-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 563115
2024-08-07T14:49:53,750 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723022393
2024-08-07T14:49:53,750 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - post_processing for attempt 0
2024-08-07T14:49:53,751 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723022393751
2024-08-07T14:49:53,751 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723022393751
2024-08-07T14:49:53,754 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723022393
2024-08-07T14:49:58,277 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Completed Request: 1943530fa750430dacf606a0d1fdaf3a
2024-08-07T14:49:58,278 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Postprocess time: 0:00:04.520349
2024-08-07T14:49:58,278 [INFO ] W-9005-check_extract__post_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:4522.12|#ModelName:check_extract__post_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723022398,766598bd-7831-4cf1-a7ea-9925decda59b, pattern=[METRICS]
2024-08-07T14:49:58,278 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:4522.12|#ModelName:check_extract__post_processing,Level:Model|#hostname:923a5f450f81,requestID:766598bd-7831-4cf1-a7ea-9925decda59b,timestamp:1723022398
2024-08-07T14:49:58,278 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4527397.079|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723022398
2024-08-07T14:49:58,278 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:262.194|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723022398
2024-08-07T14:49:58,278 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 262194, Backend time ns: 4527577629
2024-08-07T14:49:58,279 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723022398
2024-08-07T14:49:58,279 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 4527
2024-08-07T14:49:58,279 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723022398
2024-08-07T14:49:58,280 [INFO ] wf-manager-thread-10 ACCESS_LOG - /10.0.0.2:41916 "POST /wfpredict/check_extract HTTP/1.1" 200 826978
2024-08-07T14:49:58,281 [INFO ] wf-manager-thread-10 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:Unknown,timestamp:1723022398
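(Same as the 13:36 run: this request returns its 200 at 14:49:58 and no reset is logged before the next request comes in at 15:14:54, so the error does not follow every request.)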
2024-08-07T15:14:54,192 [INFO ] epollEventLoopGroup-3-13 org.pytorch.serve.http.api.rest.InferenceRequestHandler - org.pytorch.serve.util.messages.RequestInput@d52184f
2024-08-07T15:14:54,195 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - pre_processing for attempt 0
2024-08-07T15:14:54,195 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723023894195
2024-08-07T15:14:54,195 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723023894195
2024-08-07T15:14:54,197 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723023894
2024-08-07T15:14:54,279 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input: /var/data/CHP_HOME/input7/input.txt
2024-08-07T15:14:54,279 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Output: /var/data/CHP_HOME/output7/9d4020ddc03548b0bc32c0a7bdce788b
2024-08-07T15:14:54,279 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input file read successfully!
2024-08-07T15:14:54,280 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.081173
2024-08-07T15:14:54,280 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:81.28|#ModelName:check_extract__pre_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723023894,ec2dbf03-85cf-428e-9934-413b8cdee917, pattern=[METRICS]
2024-08-07T15:14:54,280 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:81.28|#ModelName:check_extract__pre_processing,Level:Model|#hostname:923a5f450f81,requestID:ec2dbf03-85cf-428e-9934-413b8cdee917,timestamp:1723023894
2024-08-07T15:14:54,337 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:142235.051|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723023894
2024-08-07T15:14:54,338 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:217.805|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723023894
2024-08-07T15:14:54,338 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 217805, Backend time ns: 142511550
2024-08-07T15:14:54,338 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723023894
2024-08-07T15:14:54,338 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 142
2024-08-07T15:14:54,338 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723023894
2024-08-07T15:14:54,342 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m1 for attempt 0
2024-08-07T15:14:54,342 [DEBUG] W-9002-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723023894342
2024-08-07T15:14:54,343 [INFO ] W-9002-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723023894343
2024-08-07T15:14:54,351 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_LOG - Backend received inference at: 1723023894
2024-08-07T15:14:57,287 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.305826
2024-08-07T15:14:57,356 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_LOG - Infer time: 0:00:02.628870
2024-08-07T15:14:57,356 [INFO ] W-9002-check_extract__m1_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:2934.96|#ModelName:check_extract__m1,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723023897,7f2562cc-3c79-4531-8189-992d21330f81, pattern=[METRICS]
2024-08-07T15:14:57,356 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_METRICS - PredictionTime.ms:2934.96|#ModelName:check_extract__m1,Level:Model|#hostname:923a5f450f81,requestID:7f2562cc-3c79-4531-8189-992d21330f81,timestamp:1723023897
2024-08-07T15:14:57,519 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:3176750.95|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723023897
2024-08-07T15:14:57,519 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:445.525|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723023897
2024-08-07T15:14:57,519 [DEBUG] W-9002-check_extract__m1_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 445525, Backend time ns: 3176710074
2024-08-07T15:14:57,519 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723023897
2024-08-07T15:14:57,519 [INFO ] W-9002-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3176
2024-08-07T15:14:57,519 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723023897
2024-08-07T15:14:57,536 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - m2 for attempt 0
2024-08-07T15:14:57,536 [DEBUG] W-9004-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723023897536
2024-08-07T15:14:57,537 [INFO ] W-9004-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723023897537
2024-08-07T15:14:59,685 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_LOG - Backend received inference at: 1723023899
2024-08-07T15:15:05,357 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.157503
2024-08-07T15:15:05,357 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_LOG - Infer time: 0:00:04.514369
2024-08-07T15:15:05,357 [INFO ] W-9004-check_extract__m2_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:5672.25|#ModelName:check_extract__m2,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723023905,de9edb0e-a19d-4faa-9394-4e829e30c425, pattern=[METRICS]
2024-08-07T15:15:05,357 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_METRICS - PredictionTime.ms:5672.25|#ModelName:check_extract__m2,Level:Model|#hostname:923a5f450f81,requestID:de9edb0e-a19d-4faa-9394-4e829e30c425,timestamp:1723023905
2024-08-07T15:15:05,656 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:8119395.094|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723023905
2024-08-07T15:15:05,656 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:297.866|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723023905
2024-08-07T15:15:05,656 [DEBUG] W-9004-check_extract__m2_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 297866, Backend time ns: 8119414813
2024-08-07T15:15:05,656 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723023905
2024-08-07T15:15:05,656 [INFO ] W-9004-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 8118
2024-08-07T15:15:05,656 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723023905
2024-08-07T15:15:05,669 [INFO ] wf-execute-thread-3 org.pytorch.serve.ensemble.DagExecutor - Invoking - m3 for attempt 0
2024-08-07T15:15:05,670 [DEBUG] W-9010-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723023905670
2024-08-07T15:15:05,670 [INFO ] W-9010-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723023905670
2024-08-07T15:15:05,759 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_LOG - Backend received inference at: 1723023905
2024-08-07T15:19:24,442 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.084652
2024-08-07T15:19:24,442 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_LOG - Infer time: 0:04:17.597480
2024-08-07T15:19:24,443 [INFO ] W-9010-check_extract__m3_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:258682.57|#ModelName:check_extract__m3,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723024164,ebe1da24-5d95-4bad-a893-adb78bd88b94, pattern=[METRICS]
2024-08-07T15:19:24,443 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_METRICS - PredictionTime.ms:258682.57|#ModelName:check_extract__m3,Level:Model|#hostname:923a5f450f81,requestID:ebe1da24-5d95-4bad-a893-adb78bd88b94,timestamp:1723024164
2024-08-07T15:19:24,730 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:2.5906014205E8|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723024164
2024-08-07T15:19:24,730 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:262.492|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723024164
2024-08-07T15:19:24,730 [DEBUG] W-9010-check_extract__m3_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 262492, Backend time ns: 259060344282
2024-08-07T15:19:24,730 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723024164
2024-08-07T15:19:24,730 [INFO ] W-9010-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 259060
2024-08-07T15:19:24,730 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723024164
2024-08-07T15:19:24,760 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - m4 for attempt 0
2024-08-07T15:19:24,760 [DEBUG] W-9008-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723024164760
2024-08-07T15:19:24,761 [INFO ] W-9008-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723024164761
2024-08-07T15:19:25,066 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_LOG - Backend received inference at: 1723024165
2024-08-07T15:20:08,721 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.163907
2024-08-07T15:20:08,721 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_LOG - Infer time: 0:00:42.489846
2024-08-07T15:20:08,721 [INFO ] W-9008-check_extract__m4_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:43654.28|#ModelName:check_extract__m4,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723024208,c7c85f63-8b5b-4777-b5b1-a2e54ef8a04d, pattern=[METRICS]
2024-08-07T15:20:08,721 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_METRICS - PredictionTime.ms:43654.28|#ModelName:check_extract__m4,Level:Model|#hostname:923a5f450f81,requestID:c7c85f63-8b5b-4777-b5b1-a2e54ef8a04d,timestamp:1723024208
2024-08-07T15:20:09,082 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4.4321820633E7|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723024209
2024-08-07T15:20:09,082 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:289.263|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723024209
2024-08-07T15:20:09,082 [DEBUG] W-9008-check_extract__m4_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 289263, Backend time ns: 44321971825
2024-08-07T15:20:09,082 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723024209
2024-08-07T15:20:09,082 [INFO ] W-9008-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 44321
2024-08-07T15:20:09,082 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723024209
2024-08-07T15:20:09,101 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m5 for attempt 0
2024-08-07T15:20:09,101 [DEBUG] W-9007-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723024209101
2024-08-07T15:20:09,102 [INFO ] W-9007-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723024209102
2024-08-07T15:20:09,173 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_LOG - Backend received inference at: 1723024209
2024-08-07T15:30:36,356 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.111619
2024-08-07T15:30:36,356 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_LOG - Infer time: 0:10:26.068321
2024-08-07T15:30:36,356 [INFO ] W-9007-check_extract__m5_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:627180.55|#ModelName:check_extract__m5,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723024836,5fb4cd9e-c949-42da-95c6-7c2341937d09, pattern=[METRICS]
2024-08-07T15:30:36,356 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_METRICS - PredictionTime.ms:627180.55|#ModelName:check_extract__m5,Level:Model|#hostname:923a5f450f81,requestID:5fb4cd9e-c949-42da-95c6-7c2341937d09,timestamp:1723024836
2024-08-07T15:30:36,402 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:6.27300829202E8|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723024836
2024-08-07T15:30:36,402 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:276.834|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723024836
2024-08-07T15:30:36,402 [DEBUG] W-9007-check_extract__m5_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 276834, Backend time ns: 627300829054
2024-08-07T15:30:36,402 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723024836
2024-08-07T15:30:36,402 [INFO ] W-9007-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 627300
2024-08-07T15:30:36,402 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723024836
2024-08-07T15:30:36,404 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - post_processing for attempt 0
2024-08-07T15:30:36,404 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723024836404
2024-08-07T15:30:36,404 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723024836404
2024-08-07T15:30:36,407 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723024836
2024-08-07T15:30:40,978 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Completed Request: 9d4020ddc03548b0bc32c0a7bdce788b
2024-08-07T15:30:40,978 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Postprocess time: 0:00:04.568286
2024-08-07T15:30:40,979 [INFO ] W-9005-check_extract__post_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:4570.42|#ModelName:check_extract__post_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723024840,73ce4b41-6e1f-4576-975b-92049aec3988, pattern=[METRICS]
2024-08-07T15:30:40,979 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4575011.426|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723024840
2024-08-07T15:30:40,979 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:4570.42|#ModelName:check_extract__post_processing,Level:Model|#hostname:923a5f450f81,requestID:73ce4b41-6e1f-4576-975b-92049aec3988,timestamp:1723024840
2024-08-07T15:30:40,979 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:141.258|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723024840
2024-08-07T15:30:40,979 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 141258, Backend time ns: 4575349182
2024-08-07T15:30:40,979 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723024840
2024-08-07T15:30:40,979 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 4575
2024-08-07T15:30:40,979 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723024840
2024-08-07T15:30:40,981 [INFO ] wf-manager-thread-11 ACCESS_LOG - /10.0.0.2:41610 "POST /wfpredict/check_extract HTTP/1.1" 200 946790
2024-08-07T15:30:40,981 [INFO ] wf-manager-thread-11 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:Unknown,timestamp:1723024840
2024-08-07T15:30:40,983 [ERROR] epollEventLoopGroup-3-13 org.pytorch.serve.http.HttpRequestHandler -
io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
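And the same pattern once more: the 200 at 15:30:40,981 is followed 2 ms later by the reset, again on the event-loop thread (epollEventLoopGroup-3-13) that accepted this request at 15:14:54 — the error always lands right after a completed response, never mid-workflow.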
2024-08-07T16:04:04,841 [INFO ] epollEventLoopGroup-3-14 org.pytorch.serve.http.api.rest.InferenceRequestHandler - org.pytorch.serve.util.messages.RequestInput@36126adb
2024-08-07T16:04:04,844 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - pre_processing for attempt 0
2024-08-07T16:04:04,844 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723026844844
2024-08-07T16:04:04,844 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723026844844
2024-08-07T16:04:04,846 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723026844
2024-08-07T16:04:04,947 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input: /var/data/CHP_HOME/input7/input.txt
2024-08-07T16:04:04,947 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Output: /var/data/CHP_HOME/output7/11a446eabfd5407c994f0e5659752a82
2024-08-07T16:04:04,947 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input file read successfully!
2024-08-07T16:04:04,947 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.100390
2024-08-07T16:04:04,947 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:100.46|#ModelName:check_extract__pre_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723026844,01d7e9e8-7477-45fb-a6c8-7e69da33ad90, pattern=[METRICS]
2024-08-07T16:04:04,948 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:100.46|#ModelName:check_extract__pre_processing,Level:Model|#hostname:923a5f450f81,requestID:01d7e9e8-7477-45fb-a6c8-7e69da33ad90,timestamp:1723026844
2024-08-07T16:04:05,005 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:160661.126|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723026845
2024-08-07T16:04:05,005 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:169.603|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723026845
2024-08-07T16:04:05,005 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 169603, Backend time ns: 160898702
2024-08-07T16:04:05,005 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723026845
2024-08-07T16:04:05,005 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 161
2024-08-07T16:04:05,005 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723026845
2024-08-07T16:04:05,008 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m1 for attempt 0
2024-08-07T16:04:05,009 [DEBUG] W-9001-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723026845009
2024-08-07T16:04:05,009 [INFO ] W-9001-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723026845009
2024-08-07T16:04:05,015 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_LOG - Backend received inference at: 1723026845
2024-08-07T16:04:08,044 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.315683
2024-08-07T16:04:08,044 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_LOG - Infer time: 0:00:02.712102
2024-08-07T16:04:08,044 [INFO ] W-9001-check_extract__m1_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:3028.03|#ModelName:check_extract__m1,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723026848,129703d0-c4bc-4da7-a902-9603602a6b5b, pattern=[METRICS]
2024-08-07T16:04:08,044 [INFO ] W-9001-check_extract__m1_1.0-stdout MODEL_METRICS - PredictionTime.ms:3028.03|#ModelName:check_extract__m1,Level:Model|#hostname:923a5f450f81,requestID:129703d0-c4bc-4da7-a902-9603602a6b5b,timestamp:1723026848
2024-08-07T16:04:08,336 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:3327341.822|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723026848
2024-08-07T16:04:08,336 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:277.969|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723026848
2024-08-07T16:04:08,337 [DEBUG] W-9001-check_extract__m1_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 277969, Backend time ns: 3327701673
2024-08-07T16:04:08,337 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723026848
2024-08-07T16:04:08,337 [INFO ] W-9001-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3327
2024-08-07T16:04:08,337 [INFO ] W-9001-check_extract__m1_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723026848
2024-08-07T16:04:08,354 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - m2 for attempt 0
2024-08-07T16:04:08,354 [DEBUG] W-9003-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723026848354
2024-08-07T16:04:08,355 [INFO ] W-9003-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723026848355
2024-08-07T16:04:08,429 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_LOG - Backend received inference at: 1723026848
2024-08-07T16:04:13,920 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.086645
2024-08-07T16:04:13,920 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_LOG - Infer time: 0:00:04.402856
2024-08-07T16:04:13,920 [INFO ] W-9003-check_extract__m2_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:5489.81|#ModelName:check_extract__m2,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723026853,c9190205-bfda-47f2-8361-2a4f98c80fe4, pattern=[METRICS]
2024-08-07T16:04:13,920 [INFO ] W-9003-check_extract__m2_1.0-stdout MODEL_METRICS - PredictionTime.ms:5489.81|#ModelName:check_extract__m2,Level:Model|#hostname:923a5f450f81,requestID:c9190205-bfda-47f2-8361-2a4f98c80fe4,timestamp:1723026853
2024-08-07T16:04:14,279 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:5924532.953|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723026854
2024-08-07T16:04:14,279 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:168.498|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723026854
2024-08-07T16:04:14,279 [DEBUG] W-9003-check_extract__m2_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 168498, Backend time ns: 5924825896
2024-08-07T16:04:14,279 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723026854
2024-08-07T16:04:14,279 [INFO ] W-9003-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 5924
2024-08-07T16:04:14,279 [INFO ] W-9003-check_extract__m2_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723026854
2024-08-07T16:04:14,305 [INFO ] wf-execute-thread-3 org.pytorch.serve.ensemble.DagExecutor - Invoking - m3 for attempt 0
2024-08-07T16:04:14,306 [DEBUG] W-9011-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723026854306
2024-08-07T16:04:14,306 [INFO ] W-9011-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723026854306
2024-08-07T16:04:14,415 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_LOG - Backend received inference at: 1723026854
2024-08-07T16:07:41,998 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.130793
2024-08-07T16:07:41,998 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_LOG - Infer time: 0:03:26.451150
2024-08-07T16:07:41,999 [INFO ] W-9011-check_extract__m3_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:207582.41|#ModelName:check_extract__m3,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723027061,a954db3b-1468-4d67-a792-ef16ab96bc13, pattern=[METRICS]
2024-08-07T16:07:41,999 [INFO ] W-9011-check_extract__m3_1.0-stdout MODEL_METRICS - PredictionTime.ms:207582.41|#ModelName:check_extract__m3,Level:Model|#hostname:923a5f450f81,requestID:a954db3b-1468-4d67-a792-ef16ab96bc13,timestamp:1723027061
2024-08-07T16:07:42,318 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:2.08011702188E8|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723027062
2024-08-07T16:07:42,318 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:360.54|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723027062
2024-08-07T16:07:42,318 [DEBUG] W-9011-check_extract__m3_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 360540, Backend time ns: 208011663670
2024-08-07T16:07:42,318 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723027062
2024-08-07T16:07:42,318 [INFO ] W-9011-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 208011
2024-08-07T16:07:42,318 [INFO ] W-9011-check_extract__m3_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723027062
2024-08-07T16:07:42,334 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - m4 for attempt 0
2024-08-07T16:07:42,334 [DEBUG] W-9009-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723027062334
2024-08-07T16:07:42,334 [INFO ] W-9009-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723027062334
2024-08-07T16:07:42,437 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_LOG - Backend received inference at: 1723027062
2024-08-07T16:08:20,015 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.160484
2024-08-07T16:08:20,016 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_LOG - Infer time: 0:00:36.416367
2024-08-07T16:08:20,016 [INFO ] W-9009-check_extract__m4_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:37577.53|#ModelName:check_extract__m4,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723027100,d121de22-7e4d-4deb-bc0b-c13602b1e3f1, pattern=[METRICS]
2024-08-07T16:08:20,016 [INFO ] W-9009-check_extract__m4_1.0-stdout MODEL_METRICS - PredictionTime.ms:37577.53|#ModelName:check_extract__m4,Level:Model|#hostname:923a5f450f81,requestID:d121de22-7e4d-4deb-bc0b-c13602b1e3f1,timestamp:1723027100
2024-08-07T16:08:20,375 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:3.8041115662E7|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723027100
2024-08-07T16:08:20,375 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:238.406|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723027100
2024-08-07T16:08:20,375 [DEBUG] W-9009-check_extract__m4_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 238406, Backend time ns: 38041172134
2024-08-07T16:08:20,375 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723027100
2024-08-07T16:08:20,375 [INFO ] W-9009-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 38041
2024-08-07T16:08:20,375 [INFO ] W-9009-check_extract__m4_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723027100
2024-08-07T16:08:20,391 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m5 for attempt 0
2024-08-07T16:08:20,391 [DEBUG] W-9006-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723027100391
2024-08-07T16:08:20,391 [INFO ] W-9006-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723027100391
2024-08-07T16:08:20,483 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_LOG - Backend received inference at: 1723027100
2024-08-07T16:16:04,729 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.096433
2024-08-07T16:16:04,730 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_LOG - Infer time: 0:07:43.147696
2024-08-07T16:16:04,730 [INFO ] W-9006-check_extract__m5_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:464244.71|#ModelName:check_extract__m5,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723027564,59f64c10-07ad-4d92-a702-a7bccd3340a6, pattern=[METRICS]
2024-08-07T16:16:04,731 [INFO ] W-9006-check_extract__m5_1.0-stdout MODEL_METRICS - PredictionTime.ms:464244.71|#ModelName:check_extract__m5,Level:Model|#hostname:923a5f450f81,requestID:59f64c10-07ad-4d92-a702-a7bccd3340a6,timestamp:1723027564
2024-08-07T16:16:04,776 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4.64384934065E8|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723027564
2024-08-07T16:16:04,776 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:239.243|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723027564
2024-08-07T16:16:04,777 [DEBUG] W-9006-check_extract__m5_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 239243, Backend time ns: 464385417366
2024-08-07T16:16:04,777 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723027564
2024-08-07T16:16:04,777 [INFO ] W-9006-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 464385
2024-08-07T16:16:04,777 [INFO ] W-9006-check_extract__m5_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723027564
2024-08-07T16:16:04,779 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - post_processing for attempt 0
2024-08-07T16:16:04,780 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723027564780
2024-08-07T16:16:04,780 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723027564780
2024-08-07T16:16:04,785 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723027564
2024-08-07T16:16:09,365 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Completed Request: 11a446eabfd5407c994f0e5659752a82
2024-08-07T16:16:09,365 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Postprocess time: 0:00:04.577773
2024-08-07T16:16:09,365 [INFO ] W-9005-check_extract__post_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:4579.49|#ModelName:check_extract__post_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723027569,1c931619-976e-4b0b-ba2a-854a8cf02e98, pattern=[METRICS]
2024-08-07T16:16:09,365 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:4579.49|#ModelName:check_extract__post_processing,Level:Model|#hostname:923a5f450f81,requestID:1c931619-976e-4b0b-ba2a-854a8cf02e98,timestamp:1723027569
2024-08-07T16:16:09,366 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4586090.406|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723027569
2024-08-07T16:16:09,366 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:380.778|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723027569
2024-08-07T16:16:09,366 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 380778, Backend time ns: 4586272283
2024-08-07T16:16:09,366 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723027569
2024-08-07T16:16:09,366 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 4585
2024-08-07T16:16:09,366 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723027569
2024-08-07T16:16:09,368 [INFO ] wf-manager-thread-12 ACCESS_LOG - /10.0.0.2:53182 "POST /wfpredict/check_extract HTTP/1.1" 200 724527
2024-08-07T16:16:09,369 [INFO ] wf-manager-thread-12 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:Unknown,timestamp:1723027569
2024-08-07T16:34:18,833 [INFO ] epollEventLoopGroup-3-15 org.pytorch.serve.http.api.rest.InferenceRequestHandler - org.pytorch.serve.util.messages.RequestInput@19e3aa37
2024-08-07T16:34:18,834 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - pre_processing for attempt 0
2024-08-07T16:34:18,835 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723028658835
2024-08-07T16:34:18,835 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723028658835
2024-08-07T16:34:18,837 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723028658
2024-08-07T16:34:18,936 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input: /var/data/CHP_HOME/input7/input.txt
2024-08-07T16:34:18,936 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Output: /var/data/CHP_HOME/output7/0a872a17906d4aa382cb5e2640951514
2024-08-07T16:34:18,937 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Input file read successfully!
2024-08-07T16:34:18,937 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.098752
2024-08-07T16:34:18,937 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:98.81|#ModelName:check_extract__pre_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723028658,6928bdb4-4b6c-4b8f-8794-c570ca074521, pattern=[METRICS]
2024-08-07T16:34:18,937 [INFO ] W-9000-check_extract__pre_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:98.81|#ModelName:check_extract__pre_processing,Level:Model|#hostname:923a5f450f81,requestID:6928bdb4-4b6c-4b8f-8794-c570ca074521,timestamp:1723028658
2024-08-07T16:34:18,993 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:158288.437|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723028658
2024-08-07T16:34:18,993 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:214.593|#model_name:check_extract__pre_processing,model_version:default|#hostname:Unknown,timestamp:1723028658
2024-08-07T16:34:18,993 [DEBUG] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 214593, Backend time ns: 158427405
2024-08-07T16:34:18,993 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723028658
2024-08-07T16:34:18,993 [INFO ] W-9000-check_extract__pre_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 158
2024-08-07T16:34:18,994 [INFO ] W-9000-check_extract__pre_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723028658
2024-08-07T16:34:18,995 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m1 for attempt 0
2024-08-07T16:34:18,995 [DEBUG] W-9002-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723028658995
2024-08-07T16:34:18,996 [INFO ] W-9002-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723028658996
2024-08-07T16:34:19,002 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_LOG - Backend received inference at: 1723028659
2024-08-07T16:34:21,993 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_LOG - Preprocess time: 0:00:00.302335
2024-08-07T16:34:21,993 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_LOG - Infer time: 0:00:02.688238
2024-08-07T16:34:21,993 [INFO ] W-9002-check_extract__m1_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:2990.88|#ModelName:check_extract__m1,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723028661,0336d887-ebf5-449d-839c-49684f713572, pattern=[METRICS]
2024-08-07T16:34:21,993 [INFO ] W-9002-check_extract__m1_1.0-stdout MODEL_METRICS - PredictionTime.ms:2990.88|#ModelName:check_extract__m1,Level:Model|#hostname:923a5f450f81,requestID:0336d887-ebf5-449d-839c-49684f713572,timestamp:1723028661
2024-08-07T16:34:22,315 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:3319745.907|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723028662
2024-08-07T16:34:22,315 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:349.007|#model_name:check_extract__m1,model_version:default|#hostname:Unknown,timestamp:1723028662
2024-08-07T16:34:22,315 [DEBUG] W-9002-check_extract__m1_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 349007, Backend time ns: 3319816327
2024-08-07T16:34:22,315 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723028662
2024-08-07T16:34:22,315 [INFO ] W-9002-check_extract__m1_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3319
2024-08-07T16:34:22,315 [INFO ] W-9002-check_extract__m1_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723028662
2024-08-07T16:34:22,584 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - m2 for attempt 0
2024-08-07T16:34:22,584 [DEBUG] W-9004-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723028662584
2024-08-07T16:34:22,584 [INFO ] W-9004-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723028662584
2024-08-07T16:34:22,692 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_LOG - Backend received inference at: 1723028662
2024-08-07T16:34:28,025 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.088335
2024-08-07T16:34:28,026 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_LOG - Infer time: 0:00:04.244538
2024-08-07T16:34:28,026 [INFO ] W-9004-check_extract__m2_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:5333.16|#ModelName:check_extract__m2,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723028668,4e15bc69-232b-49c5-be5f-e353c1b606d5, pattern=[METRICS]
2024-08-07T16:34:28,026 [INFO ] W-9004-check_extract__m2_1.0-stdout MODEL_METRICS - PredictionTime.ms:5333.16|#ModelName:check_extract__m2,Level:Model|#hostname:923a5f450f81,requestID:4e15bc69-232b-49c5-be5f-e353c1b606d5,timestamp:1723028668
2024-08-07T16:34:28,342 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:5757389.712|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723028668
2024-08-07T16:34:28,342 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:167.477|#model_name:check_extract__m2,model_version:default|#hostname:Unknown,timestamp:1723028668
2024-08-07T16:34:28,342 [DEBUG] W-9004-check_extract__m2_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 167477, Backend time ns: 5757896002
2024-08-07T16:34:28,342 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723028668
2024-08-07T16:34:28,342 [INFO ] W-9004-check_extract__m2_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 5757
2024-08-07T16:34:28,342 [INFO ] W-9004-check_extract__m2_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723028668
2024-08-07T16:34:28,357 [INFO ] wf-execute-thread-3 org.pytorch.serve.ensemble.DagExecutor - Invoking - m3 for attempt 0
2024-08-07T16:34:28,358 [DEBUG] W-9010-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723028668358
2024-08-07T16:34:28,358 [INFO ] W-9010-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723028668358
2024-08-07T16:34:28,463 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_LOG - Backend received inference at: 1723028668
2024-08-07T16:38:35,959 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.043072
2024-08-07T16:38:35,959 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_LOG - Infer time: 0:04:06.452525
2024-08-07T16:38:35,959 [INFO ] W-9010-check_extract__m3_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:247496.08|#ModelName:check_extract__m3,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723028915,046ff5d3-fa52-41f4-985d-21c8424b728c, pattern=[METRICS]
2024-08-07T16:38:35,959 [INFO ] W-9010-check_extract__m3_1.0-stdout MODEL_METRICS - PredictionTime.ms:247496.08|#ModelName:check_extract__m3,Level:Model|#hostname:923a5f450f81,requestID:046ff5d3-fa52-41f4-985d-21c8424b728c,timestamp:1723028915
2024-08-07T16:38:36,307 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:2.47949571144E8|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723028916
2024-08-07T16:38:36,308 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:228.55|#model_name:check_extract__m3,model_version:default|#hostname:Unknown,timestamp:1723028916
2024-08-07T16:38:36,308 [DEBUG] W-9010-check_extract__m3_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 228550, Backend time ns: 247949833007
2024-08-07T16:38:36,308 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723028916
2024-08-07T16:38:36,308 [INFO ] W-9010-check_extract__m3_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 247949
2024-08-07T16:38:36,308 [INFO ] W-9010-check_extract__m3_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723028916
2024-08-07T16:38:36,323 [INFO ] wf-execute-thread-0 org.pytorch.serve.ensemble.DagExecutor - Invoking - m4 for attempt 0
2024-08-07T16:38:36,324 [DEBUG] W-9008-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723028916324
2024-08-07T16:38:36,324 [INFO ] W-9008-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723028916324
2024-08-07T16:38:36,749 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_LOG - Backend received inference at: 1723028916
2024-08-07T16:39:23,225 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.106842
2024-08-07T16:39:23,225 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_LOG - Infer time: 0:00:45.365943
2024-08-07T16:39:23,225 [INFO ] W-9008-check_extract__m4_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:46473.34|#ModelName:check_extract__m4,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723028963,f5cd976d-46b9-4cf5-a386-1ef002bd5e69, pattern=[METRICS]
2024-08-07T16:39:23,225 [INFO ] W-9008-check_extract__m4_1.0-stdout MODEL_METRICS - PredictionTime.ms:46473.34|#ModelName:check_extract__m4,Level:Model|#hostname:923a5f450f81,requestID:f5cd976d-46b9-4cf5-a386-1ef002bd5e69,timestamp:1723028963
2024-08-07T16:39:23,585 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4.7260935749E7|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723028963
2024-08-07T16:39:23,585 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:221.039|#model_name:check_extract__m4,model_version:default|#hostname:Unknown,timestamp:1723028963
2024-08-07T16:39:23,585 [DEBUG] W-9008-check_extract__m4_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 221039, Backend time ns: 47261147959
2024-08-07T16:39:23,585 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723028963
2024-08-07T16:39:23,585 [INFO ] W-9008-check_extract__m4_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 47260
2024-08-07T16:39:23,585 [INFO ] W-9008-check_extract__m4_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723028963
2024-08-07T16:39:23,599 [INFO ] wf-execute-thread-1 org.pytorch.serve.ensemble.DagExecutor - Invoking - m5 for attempt 0
2024-08-07T16:39:23,599 [DEBUG] W-9007-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723028963599
2024-08-07T16:39:23,599 [INFO ] W-9007-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723028963599
2024-08-07T16:39:23,677 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_LOG - Backend received inference at: 1723028963
2024-08-07T16:46:50,931 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_LOG - Preprocess time: 0:00:01.032119
2024-08-07T16:46:50,932 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_LOG - Infer time: 0:07:26.221158
2024-08-07T16:46:50,932 [INFO ] W-9007-check_extract__m5_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:447253.92|#ModelName:check_extract__m5,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723029410,cfc30c4a-c3d6-4430-8e82-af36d037c035, pattern=[METRICS]
2024-08-07T16:46:50,932 [INFO ] W-9007-check_extract__m5_1.0-stdout MODEL_METRICS - PredictionTime.ms:447253.92|#ModelName:check_extract__m5,Level:Model|#hostname:923a5f450f81,requestID:cfc30c4a-c3d6-4430-8e82-af36d037c035,timestamp:1723029410
2024-08-07T16:46:50,979 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4.47380147346E8|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723029410
2024-08-07T16:46:50,979 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:135.887|#model_name:check_extract__m5,model_version:default|#hostname:Unknown,timestamp:1723029410
2024-08-07T16:46:50,980 [DEBUG] W-9007-check_extract__m5_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 135887, Backend time ns: 447380380929
2024-08-07T16:46:50,980 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723029410
2024-08-07T16:46:50,980 [INFO ] W-9007-check_extract__m5_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 447380
2024-08-07T16:46:50,980 [INFO ] W-9007-check_extract__m5_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:1.0|#Level:Host|#hostname:Unknown,timestamp:1723029410
2024-08-07T16:46:50,980 [INFO ] wf-execute-thread-2 org.pytorch.serve.ensemble.DagExecutor - Invoking - post_processing for attempt 0
2024-08-07T16:46:50,980 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1723029410980
2024-08-07T16:46:50,981 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1723029410981
2024-08-07T16:46:50,982 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Backend received inference at: 1723029410
2024-08-07T16:46:55,750 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Completed Request: 0a872a17906d4aa382cb5e2640951514
2024-08-07T16:46:55,750 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_LOG - Postprocess time: 0:00:04.763652
2024-08-07T16:46:55,750 [INFO ] W-9005-check_extract__post_processing_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:4766.88|#ModelName:check_extract__post_processing,Level:Model|#type:GAUGE|#hostname:923a5f450f81,1723029415,3d8c36dd-8e54-4604-bb42-398c6059d7b5, pattern=[METRICS]
2024-08-07T16:46:55,750 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:4769944.246|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723029415
2024-08-07T16:46:55,751 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:190.781|#model_name:check_extract__post_processing,model_version:default|#hostname:Unknown,timestamp:1723029415
2024-08-07T16:46:55,751 [DEBUG] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 190781, Backend time ns: 4770299452
2024-08-07T16:46:55,751 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:Unknown,timestamp:1723029415
2024-08-07T16:46:55,751 [INFO ] W-9005-check_extract__post_processing_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 4769
2024-08-07T16:46:55,751 [INFO ] W-9005-check_extract__post_processing_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:Unknown,timestamp:1723029415
2024-08-07T16:46:55,798 [INFO ] W-9005-check_extract__post_processing_1.0-stdout MODEL_METRICS - PredictionTime.ms:4766.88|#ModelName:check_extract__post_processing,Level:Model|#hostname:923a5f450f81,requestID:3d8c36dd-8e54-4604-bb42-398c6059d7b5,timestamp:1723029415
2024-08-07T16:46:55,799 [INFO ] wf-manager-thread-13 ACCESS_LOG - /10.0.0.2:42952 "POST /wfpredict/check_extract HTTP/1.1" 200 756967
2024-08-07T16:46:55,800 [INFO ] wf-manager-thread-13 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:Unknown,timestamp:1723029415
Hi, it might be that you're running into your 30-minute timeout. In your log there are two jobs before the error: 3a9b19e383924f06a7c5659f76e3e22c and c7344a73d9174685bf830a02bc2f7bf4, which take 12 and 18 minutes respectively. Since your job queue size is 1, I assume that c7344 is queued and waits 12 minutes until 3a9b1 finishes. It then gets processed for 18 minutes, and right before it's done the client resets the connection due to its timeout; when TorchServe then tries to send the response, it notices that the connection has been reset.
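If that's what's happening, a client-side read timeout that covers queue wait plus processing should make the error disappear. A minimal sketch, assuming a Python `requests` client (the actual client code isn't shown in this issue, and the host is a placeholder):

```python
import requests

# Hypothetical client call; the key point is a read timeout larger than
# worst-case queue wait (12 min) + processing (18 min) = 30 min.
with open("input.txt", "rb") as f:
    resp = requests.post(
        "http://<ts-host>:8080/wfpredict/check_extract",  # placeholder host
        files={"data": f},
        timeout=(10, 3600),  # (connect, read) in seconds; read > 30 minutes
    )
resp.raise_for_status()
```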
Apologies, I forgot to mention that after your first comment I modified the config file to the following; the logs above were captured after this change:
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
# management
# number_of_netty_threads=16
# netty_client_threads=16
default_response_timeout=10000000
# default_workers_per_model=2
max_request_size=65535000
max_response_size=65535000
load_models=
model_store=model_store
workflow_store=wf_store
enable_envvars_config=true
disable_system_metrics=true
job_queue_size=3
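With `job_queue_size=3`, one way to sanity-check the queueing behavior is to fire a handful of concurrent requests and watch for 503s once the queue fills. A rough sketch (endpoint and payload are placeholders, not taken from the actual test setup):

```python
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://localhost:8080/wfpredict/check_extract"  # assumed endpoint

def fire(i):
    # With one busy worker and a queue of 3, requests beyond the queue
    # should come back as 503 SERVICE_UNAVAILABLE instead of queueing.
    with open("input.txt", "rb") as f:  # placeholder payload
        r = requests.post(URL, files={"data": f}, timeout=(10, 3600))
    return i, r.status_code

with ThreadPoolExecutor(max_workers=6) as ex:
    for i, status in ex.map(fire, range(6)):
        print(f"request {i}: HTTP {status}")
```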
Also, the arrival times of the two requests are different, so the timeout is not the issue here (the third column in the image is the time the request was sent to TorchServe):
3a9b19e383924f06a7c5659f76e3e22c - 2024-08-07T13:24:03 IST (from the logs) == 03:54:03 EST (from the image attached)
c7344a73d9174685bf830a02bc2f7bf4 - 2024-08-07T13:50:52 IST (from the logs) == 04:20:52 EST (from the image attached)
And when two requests arrive simultaneously, they are split between the two nodes in the swarm, so I don't think the timeout is the actual issue here. Rows 35 and 36 in the image are an example of two requests received almost at the same time and processed without any connection issues.
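For anyone double-checking the IST to Eastern conversion above, a quick sketch (note that US Eastern in August is EDT/UTC-4, so the "EST" label is approximate, but the numbers line up):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

for req_id, ts in [
    ("3a9b19e383924f06a7c5659f76e3e22c", datetime(2024, 8, 7, 13, 24, 3)),
    ("c7344a73d9174685bf830a02bc2f7bf4", datetime(2024, 8, 7, 13, 50, 52)),
]:
    ist = ts.replace(tzinfo=ZoneInfo("Asia/Kolkata"))
    eastern = ist.astimezone(ZoneInfo("America/New_York"))
    print(req_id[:8], eastern.strftime("%H:%M:%S %Z"))
# 3a9b19e3 03:54:03 EDT
# c7344a73 04:20:52 EDT
```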
Closing this. For now, changing the minibatch size from 100 to 30 resolved this issue. Thank you.
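For reference, the change amounts to processing inputs in smaller chunks inside the handler. An illustrative sketch only (the actual handler code isn't shown in this thread, and the names here are made up):

```python
# Processing inputs in chunks of 30 instead of 100 reduces the work
# (and peak GPU memory) per forward pass inside the handler.
MINIBATCH_SIZE = 30  # was 100

def run_in_minibatches(model, samples):
    outputs = []
    for start in range(0, len(samples), MINIBATCH_SIZE):
        chunk = samples[start:start + MINIBATCH_SIZE]
        outputs.extend(model(chunk))  # assumes model() accepts a list of samples
    return outputs
```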