load model failed???
I'm testing TorchServe using the tutorial at https://github.com/pytorch/serve/tree/master/examples/image_classifier/mnist, but it doesn't work:

(p39pyt171) D:\pythonProject\torchserve-master\examples\image_classifier\mnist>torchserve --start --model-store E:/Ai_Resources/torch_serve_models --models mnist=mnist.mar --ts-config config.properties
(p39pyt171) D:\pythonProject\torchserve-master\examples\image_classifier\mnist>WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2023-09-07T10:28:35,667 [WARN ] main org.pytorch.serve.util.ConfigManager - Your torchserve instance can access any URL to load models. When deploying to production, make sure to limit the set of allowed_urls in config.properties
2023-09-07T10:28:35,673 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager...
2023-09-07T10:28:35,759 [INFO ] main org.pytorch.serve.metrics.configuration.MetricConfiguration - Successfully loaded metrics configuration from D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml
2023-09-07T10:28:36,216 [INFO ] main org.pytorch.serve.ModelServer - Torchserve version: 0.8.2
TS Home: D:\pyenvironment\envs\p39pyt171\Lib\site-packages
Current directory: D:\pythonProject\torchserve-master\examples\image_classifier\mnist
Temp directory: C:\Users\dell\AppData\Local\Temp
Metrics config path: D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml
Number of GPUs: 0
Number of CPUs: 40
Max heap size: 30688 M
Python executable: D:\pyenvironment\envs\p39pyt171\python.exe
Config file: config.properties
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8081
Metrics address: http://127.0.0.1:8082
Model Store: E:\Ai_Resources\torch_serve_models
Initial Models: mnist=mnist.mar
Log dir: D:\pythonProject\torchserve-master\examples\image_classifier\mnist\logs
Metrics dir: D:\pythonProject\torchserve-master\examples\image_classifier\mnist\logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 40
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Limit Maximum Image Pixels: true
Prefer direct buffer: false
Allowed Urls: [file://.|http(s)?://.]
Custom python dependency for model allowed: false
Enable metrics API: true
Metrics mode: log
Disable system metrics: false
Workflow Store: E:\Ai_Resources\torch_serve_models
Model config: N/A
2023-09-07T10:28:36,227 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Loading snapshot serializer plugin...
2023-09-07T10:28:36,256 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: mnist.mar
2023-09-07T10:28:36,460 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model mnist
2023-09-07T10:28:36,461 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model mnist
2023-09-07T10:28:36,465 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model mnist loaded.
2023-09-07T10:28:36,468 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: mnist, count: 40
2023-09-07T10:28:36,487 [DEBUG] W-9012-mnist_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [D:\pyenvironment\envs\p39pyt171\python.exe, D:\pyenvironment\envs\p39pyt171\Lib\site-packages\ts\model_service_worker.py, --sock-type, tcp, --port, 9012, --metrics-config, D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml]
[... near-identical "Worker cmdline" lines for workers W-9000 through W-9039 omitted ...]
2023-09-07T10:28:36,547 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: NioServerSocketChannel.
2023-09-07T10:28:36,957 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://127.0.0.1:8080
2023-09-07T10:28:36,958 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: NioServerSocketChannel.
2023-09-07T10:28:36,963 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://127.0.0.1:8081
2023-09-07T10:28:36,964 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: NioServerSocketChannel.
2023-09-07T10:28:36,967 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://127.0.0.1:8082
Model server started.
2023-09-07T10:28:37,678 [WARN ] pool-3-thread-1 org.pytorch.serve.metrics.MetricCollector - worker pid is not available yet.
2023-09-07T10:28:37,825 [INFO ] W-9012-mnist_1.0-stdout MODEL_LOG - Listening on addr:port: 127.0.0.1:9012
2023-09-07T10:28:37,844 [INFO ] W-9012-mnist_1.0-stdout MODEL_LOG - Successfully loaded D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml.
2023-09-07T10:28:37,846 [INFO ] W-9012-mnist_1.0-stdout MODEL_LOG - [PID]12360
2023-09-07T10:28:37,846 [INFO ] W-9012-mnist_1.0-stdout MODEL_LOG - Torch worker started.
2023-09-07T10:28:37,847 [DEBUG] W-9012-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - W-9012-mnist_1.0 State change null -> WORKER_STARTED
2023-09-07T10:28:37,847 [INFO ] W-9012-mnist_1.0-stdout MODEL_LOG - Python runtime: 3.9.17
2023-09-07T10:28:37,852 [INFO ] W-9012-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9012
2023-09-07T10:28:37,853 [INFO ] W-9014-mnist_1.0-stdout MODEL_LOG - Listening on addr:port: 127.0.0.1:9014
2023-09-07T10:28:37,867 [INFO ] W-9012-mnist_1.0-stdout MODEL_LOG - Connection accepted: ('127.0.0.1', 9012).
2023-09-07T10:28:37,872 [INFO ] W-9012-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD to backend at: 1694053717872
2023-09-07T10:28:37,876 [INFO ] W-9014-mnist_1.0-stdout MODEL_LOG - Successfully loaded D:\pye
What does your log/model_log.log say?
Hi @joe20002000 Looking at the logs, the model seems to be in the process of loading. Could you please share the error message you are seeing?
Thank you for your reply. I have solved that problem: it was a missing Python dependency, and the YOLOv8 example now works. I am currently trying to adapt the YOLOv8 example with my own custom_handler.py, but I am still facing issues loading the model when starting my YOLOv7 service. The running environment is the virtual environment used to train YOLOv7 (PyTorch 1.7.1) on Windows 10, and the TorchServe version is 0.8.2.
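For anyone hitting the same missing-dependency failure: TorchServe can install Python packages per model if the archive carries a requirements file and the corresponding option is enabled. A minimal sketch, with illustrative file names from the MNIST example (the actual package list depends on what your handler imports):

```shell
# requirements.txt lists whatever the handler imports, e.g.:
#   ultralytics
# Bundle it into the .mar with -r/--requirements-file:
torch-model-archiver --model-name mnist --version 1.0 \
    --serialized-file mnist_cnn.pt --handler mnist_handler.py \
    -r requirements.txt --export-path model_store

# And enable per-model dependency installs in config.properties:
#   install_py_dep_per_model=true
```

With that flag set, each worker installs the listed packages before loading the model, instead of relying on whatever happens to be in the serving environment.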
Here is my custom_handler.py:

import logging
import os
from collections import Counter

import torch
from torchvision import transforms
from ultralytics import YOLO
from models.experimental import attempt_load

from ts.torch_handler.object_detector import ObjectDetector

logger = logging.getLogger(__name__)

try:
    import torch_xla.core.xla_model as xm

    XLA_AVAILABLE = True
except ImportError as error:
    XLA_AVAILABLE = False


class Yolov8Handler(ObjectDetector):
    image_processing = transforms.Compose(
        [transforms.Resize(320), transforms.CenterCrop(320), transforms.ToTensor()]
    )

    def __init__(self):
        super(Yolov8Handler, self).__init__()

    def initialize(self, context):
        # Set device type
        # if torch.cuda.is_available():
        #     self.device = torch.device("cuda")
        # elif XLA_AVAILABLE:
        #     self.device = xm.xla_device()
        # else:
        self.device = torch.device("cpu")

        # Load the model
        properties = context.system_properties
        self.manifest = context.manifest
        model_dir = properties.get("model_dir")
        self.model_pt_path = None
        if "serializedFile" in self.manifest["model"]:
            serialized_file = self.manifest["model"]["serializedFile"]
            self.model_pt_path = os.path.join(model_dir, serialized_file)
        self.model = self._load_torchscript_model(self.model_pt_path)
        logger.debug("Model file %s loaded successfully", self.model_pt_path)
        self.initialized = True

    def _load_torchscript_model(self, model_pt_path):
        """Loads the PyTorch model and returns the NN model object.

        Args:
            model_pt_path (str): denotes the path of the model file.

        Returns:
            (NN Model Object): the loaded model object.
        """
        # TODO: remove this method if https://github.com/pytorch/text/issues/1793 gets resolved
        # model = YOLO(model_pt_path)
        # model.to(self.device)
        device = torch.device("cpu")  # Change to "cuda" if you want to use the GPU
        model = attempt_load(model_pt_path, map_location=device)
        return model

    def postprocess(self, res):
        output = []
        for data in res:
            classes = data.boxes.cls.tolist()
            names = data.names
            # Map to class names
            classes = map(lambda cls: names[int(cls)], classes)
            # Get a count of objects detected
            result = Counter(classes)
            output.append(dict(result))
        return output
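One thing to note about postprocess: it assumes each element of res is an ultralytics-style Results object exposing .boxes.cls and .names, while a model loaded via YOLOv7's attempt_load returns raw tensors, so this method would likely need adapting. The counting logic itself can be exercised in isolation with hypothetical stub objects standing in for Results:

```python
from collections import Counter
from types import SimpleNamespace


def postprocess(res):
    """Same counting logic as the handler: map class ids to names, count them."""
    output = []
    for data in res:
        classes = data.boxes.cls.tolist()
        names = data.names
        classes = map(lambda cls: names[int(cls)], classes)
        output.append(dict(Counter(classes)))
    return output


# Hypothetical stand-in for an ultralytics Results object:
fake_result = SimpleNamespace(
    boxes=SimpleNamespace(cls=SimpleNamespace(tolist=lambda: [0.0, 0.0, 1.0])),
    names={0: "person", 1: "car"},
)

print(postprocess([fake_result]))  # → [{'person': 2, 'car': 1}]
```

Driving the method with stubs like this is a quick way to separate "the handler's Python logic is wrong" from "the model failed to load" when debugging.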
The model was saved with torch.save(ckpt, best), so I have a checkpoint named "best.pt". This file loads fine when custom_handler.py is run on its own. I then tried to start the YOLOv7 service with the following commands:

(1) torch-model-archiver --model-name yolov7n --version 1.0 --serialized-file D:/pythonProject/yolov7-main/runs/train/big_2/weights/best.pt --handler custom_handler_yoloV7.py --export-path E:/Ai_Resources/torch_serve_models
(2) torchserve --start --model-store E:/Ai_Resources/torch_serve_models --ncs --models yolov7n=yolov7n.mar
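One thing worth checking with command (1): the handler imports models.experimental from the YOLOv7 repo, and those modules are not packed into the .mar by the command above, so a worker may only find them if the server happens to run from the repo directory. torch-model-archiver's --extra-files flag can bundle extra source files; a sketch with hypothetical paths (the exact list depends on YOLOv7's import graph):

```shell
torch-model-archiver --model-name yolov7n --version 1.0 \
    --serialized-file D:/pythonProject/yolov7-main/runs/train/big_2/weights/best.pt \
    --handler custom_handler_yoloV7.py \
    --extra-files models/experimental.py,models/common.py,models/yolo.py \
    --export-path E:/Ai_Resources/torch_serve_models

# After starting torchserve, the Management API reports worker state and load errors:
curl http://127.0.0.1:8081/models/yolov7n
```

The describe call is often the fastest way to see why workers die, since it surfaces the backend's failure reason without digging through ts_log.log.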
Relevant parts of ts_log.log:

===============#1======================
2023-09-08T17:39:43,644 [WARN ] main org.pytorch.serve.util.ConfigManager - Your torchserve instance can access any URL to load models. When deploying to production, make sure to limit the set of allowed_urls in config.properties
2023-09-08T17:39:43,649 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager...
2023-09-08T17:39:43,732 [INFO ] main org.pytorch.serve.metrics.configuration.MetricConfiguration - Successfully loaded metrics configuration from D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml
2023-09-08T17:39:44,167 [INFO ] main org.pytorch.serve.ModelServer - Torchserve version: 0.8.2
TS Home: D:\pyenvironment\envs\p39pyt171\Lib\site-packages
Current directory: D:\pythonProject\yolov7-main
Temp directory: C:\Users\dell\AppData\Local\Temp
Metrics config path: D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml
Number of GPUs: 0
Number of CPUs: 40
Max heap size: 30688 M
Python executable: D:\pyenvironment\envs\p39pyt171\python.exe
Config file: N/A
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8081
Metrics address: http://127.0.0.1:8082
Model Store: E:\Ai_Resources\torch_serve_models
Initial Models: yolov7n=yolov7n.mar
Log dir: D:\pythonProject\yolov7-main\logs
Metrics dir: D:\pythonProject\yolov7-main\logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 40
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Limit Maximum Image Pixels: true
Prefer direct buffer: false
Allowed Urls: [file://.|http(s)?://.]
Custom python dependency for model allowed: false
Enable metrics API: true
Metrics mode: log
Disable system metrics: false
Workflow Store: E:\Ai_Resources\torch_serve_models
Model config: N/A
2023-09-08T17:39:44,190 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Loading snapshot serializer plugin...
2023-09-08T17:39:44,213 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: yolov7n.mar
2023-09-08T17:39:44,559 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model yolov7n
2023-09-08T17:39:44,560 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model yolov7n
2023-09-08T17:39:44,561 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model yolov7n loaded.
2023-09-08T17:39:44,561 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: yolov7n, count: 40
2023-09-08T17:39:44,605 [DEBUG] W-9008-yolov7n_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [D:\pyenvironment\envs\p39pyt171\python.exe, D:\pyenvironment\envs\p39pyt171\Lib\site-packages\ts\model_service_worker.py, --sock-type, tcp, --port, 9008, --metrics-config, D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml]
[... near-identical "Worker cmdline" lines for W-9004, W-9009, W-9005 and W-9002 omitted ...]
======================#1 END=====================
=====================#2======================== \site-packages\ts\model_service_worker.py, --sock-type, tcp, --port, 9029, --metrics-config, D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml] 2023-09-08T17:39:44,610 [DEBUG] W-9036-yolov7n_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [D:\pyenvironment\envs\p39pyt171\python.exe, D:\pyenvironment\envs\p39pyt171\Lib\site-packages\ts\model_service_worker.py, --sock-type, tcp, --port, 9036, --metrics-config, D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml] 2023-09-08T17:39:44,609 [DEBUG] W-9032-yolov7n_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [D:\pyenvironment\envs\p39pyt171\python.exe, D:\pyenvironment\envs\p39pyt171\Lib\site-packages\ts\model_service_worker.py, --sock-type, tcp, --port, 9032, --metrics-config, D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml] 2023-09-08T17:39:44,854 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://127.0.0.1:8080 2023-09-08T17:39:44,854 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://127.0.0.1:8080 2023-09-08T17:39:44,855 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: NioServerSocketChannel. 2023-09-08T17:39:44,855 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: NioServerSocketChannel. 2023-09-08T17:39:44,866 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://127.0.0.1:8081 2023-09-08T17:39:44,866 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://127.0.0.1:8081 2023-09-08T17:39:44,867 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: NioServerSocketChannel. 2023-09-08T17:39:44,867 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: NioServerSocketChannel. 
2023-09-08T17:39:44,875 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://127.0.0.1:8082 2023-09-08T17:39:44,875 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://127.0.0.1:8082 2023-09-08T17:39:45,718 [WARN ] pool-3-thread-1 org.pytorch.serve.metrics.MetricCollector - worker pid is not available yet. 2023-09-08T17:39:45,718 [WARN ] pool-3-thread-1 org.pytorch.serve.metrics.MetricCollector - worker pid is not available yet. 2023-09-08T17:39:45,945 [INFO ] W-9013-yolov7n_1.0-stdout MODEL_LOG - Listening on addr:port: 127.0.0.1:9013 2023-09-08T17:39:45,945 [INFO ] W-9004-yolov7n_1.0-stdout MODEL_LOG - Listening on addr:port: 127.0.0.1:9004 2023-09-08T17:39:45,966 [INFO ] W-9013-yolov7n_1.0-stdout MODEL_LOG - Successfully loaded D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml. 2023-09-08T17:39:45,967 [INFO ] W-9004-yolov7n_1.0-stdout MODEL_LOG - Successfully loaded D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml. 2023-09-08T17:39:45,967 [INFO ] W-9013-yolov7n_1.0-stdout MODEL_LOG - [PID]28252 2023-09-08T17:39:45,972 [DEBUG] W-9013-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9013-yolov7n_1.0 State change null -> WORKER_STARTED 2023-09-08T17:39:45,968 [INFO ] W-9004-yolov7n_1.0-stdout MODEL_LOG - [PID]11552 2023-09-08T17:39:45,972 [DEBUG] W-9004-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9004-yolov7n_1.0 State change null -> WORKER_STARTED 2023-09-08T17:39:45,972 [DEBUG] W-9013-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9013-yolov7n_1.0 State change null -> WORKER_STARTED 2023-09-08T17:39:45,972 [INFO ] W-9004-yolov7n_1.0-stdout MODEL_LOG - Torch worker started. 2023-09-08T17:39:45,972 [DEBUG] W-9004-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9004-yolov7n_1.0 State change null -> WORKER_STARTED 2023-09-08T17:39:45,972 [INFO ] W-9013-yolov7n_1.0-stdout MODEL_LOG - Torch worker started. 
2023-09-08T17:39:45,974 [INFO ] W-9004-yolov7n_1.0-stdout MODEL_LOG - Python runtime: 3.9.17
2023-09-08T17:39:45,975 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - Listening on addr:port: 127.0.0.1:9007
2023-09-08T17:39:45,975 [INFO ] W-9013-yolov7n_1.0-stdout MODEL_LOG - Python runtime: 3.9.17
2023-09-08T17:39:45,978 [INFO ] W-9004-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9004
2023-09-08T17:39:45,978 [INFO ] W-9013-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9013
2023-09-08T17:39:46,004 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - Successfully loaded D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml.
2023-09-08T17:39:46,005 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - [PID]28544
2023-09-08T17:39:46,006 [DEBUG] W-9007-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9007-yolov7n_1.0 State change null -> WORKER_STARTED
2023-09-08T17:39:46,006 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - Torch worker started.
2023-09-08T17:39:46,007 [INFO ] W-9007-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9007
2023-09-08T17:39:46,007 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - Python runtime: 3.9.17
2023-09-08T17:39:46,010 [INFO ] W-9004-yolov7n_1.0-stdout MODEL_LOG - Connection accepted: ('127.0.0.1', 9004).
2023-09-08T17:39:46,010 [INFO ] W-9013-yolov7n_1.0-stdout MODEL_LOG - Connection accepted: ('127.0.0.1', 9013).
2023-09-08T17:39:46,016 [INFO ] W-9004-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD to backend at: 1694165986016
2023-09-08T17:39:46,016 [INFO ] W-9013-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD to backend at: 1694165986016
2023-09-08T17:39:46,017 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - Connection accepted: ('127.0.0.1', 9007).
2023-09-08T17:39:46,017 [INFO ] W-9007-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD to backend at: 1694165986017
2023-09-08T17:39:46,021 [INFO ] W-9006-yolov7n_1.0-stdout MODEL_LOG - Listening on addr:port: 127.0.0.1:9006
2023-09-08T17:39:46,032 [INFO ] pool-3-thread-1 TS_METRICS - CPUUtilization.Percent:82.9|#Level:Host|#hostname:ONION,timestamp:1694165986
2023-09-08T17:39:46,034 [INFO ] pool-3-thread-1 TS_METRICS - DiskAvailable.Gigabytes:413.3251533508301|#Level:Host|#hostname:ONION,timestamp:1694165986
2023-09-08T17:39:46,035 [INFO ] pool-3-thread-1 TS_METRICS - DiskUsage.Gigabytes:186.67581939697266|#Level:Host|#hostname:ONION,timestamp:1694165986
2023-09-08T17:39:46,037 [INFO ] pool-3-thread-1 TS_METRICS - DiskUtilization.Percent:31.1|#Level:Host|#hostname:ONION,timestamp:1694165986
2023-09-08T17:39:46,037 [INFO ] pool-3-thread-1 TS_METRICS - MemoryAvailable.Megabytes:238575.84375|#Level:Host|#hostname:ONION,timestamp:1694165986
2023-09-08T17:39:46,038 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUsed.Megabytes:22204.62109375|#Level:Host|#hostname:ONION,timestamp:1694165986
2023-09-08T17:39:46,039 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUtilization.Percent:8.5|#Level:Host|#hostname:ONION,timestamp:1694165986
2023-09-08T17:39:46,044 [INFO ] W-9006-yolov7n_1.0-stdout MODEL_LOG - Successfully loaded D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml.
2023-09-08T17:39:46,045 [INFO ] W-9006-yolov7n_1.0-stdout MODEL_LOG - [PID]29284
2023-09-08T17:39:46,045 [DEBUG] W-9006-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9006-yolov7n_1.0 State change null -> WORKER_STARTED
2023-09-08T17:39:46,045 [INFO ] W-9006-yolov7n_1.0-stdout MODEL_LOG - Torch worker started.
2023-09-08T17:39:46,046 [INFO ] W-9006-yolov7n_1.0-stdout MODEL_LOG - Python runtime: 3.9.17
2023-09-08T17:39:46,047 [INFO ] W-9006-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9006
2023-09-08T17:39:46,052 [INFO ] W-9006-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD to backend at: 1694165986052
2023-09-08T17:39:46,052 [INFO ] W-9006-yolov7n_1.0-stdout MODEL_LOG - Connection accepted: ('127.0.0.1', 9006).
2023-09-08T17:39:46,066 [INFO ] W-9013-yolov7n_1.0-stdout MODEL_LOG - model_name: yolov7n, batchSize: 1
2023-09-08T17:39:46,066 [INFO ] W-9004-yolov7n_1.0-stdout MODEL_LOG - model_name: yolov7n, batchSize: 1
2023-09-08T17:39:46,066 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - model_name: yolov7n, batchSize: 1
2023-09-08T17:39:46,071 [INFO ] W-9006-yolov7n_1.0-stdout MODEL_LOG - model_name: yolov7n, batchSize: 1
2023-09-08T17:39:46,126 [INFO ] W-9011-yolov7n_1.0-stdout MODEL_LOG - Listening on addr:port: 127.0.0.1:9011
2023-09-08T17:39:46,149 [INFO ] W-9011-yolov7n_1.0-stdout MODEL_LOG - Successfully loaded D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml.
2023-09-08T17:39:46,150 [INFO ] W-9011-yolov7n_1.0-stdout MODEL_LOG - [PID]26644
2023-09-08T17:39:46,152 [INFO ] W-9011-yolov7n_1.0-stdout MODEL_LOG - Torch worker started.
2023-09-08T17:39:46,153 [DEBUG] W-9011-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9011-yolov7n_1.0 State change null -> WORKER_STARTED
2023-09-08T17:39:46,153 [INFO ] W-9011-yolov7n_1.0-stdout MODEL_LOG - Python runtime: 3.9.17
2023-09-08T17:39:46,159 [INFO ] W-9011-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9011
2023-09-08T17:39:46,164 [INFO ] W-9011-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD to backend at: 1694165986164
2023-09-08T17:39:46,164 [INFO ] W-9011-yolov7n_1.0-stdout MODEL_LOG - Connection accepted: ('127.0.0.1', 9011).
2023-09-08T17:39:46,168 [INFO ] W-9025-yolov7n_1.0-stdout MODEL_LOG - Listening on addr:port: 127.0.0.1:9025
2023-09-08T17:39:46,180 [INFO ] W-9011-yolov7n_1.0-stdout MODEL_LOG - model_name: yolov7n, batchSize: 1
2023-09-08T17:39:46,190 [INFO ] W-9025-yolov7n_1.0-stdout MODEL_LOG - Successfully loaded D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml.
2023-09-08T17:39:46,191 [INFO ] W-9025-yolov7n_1.0-stdout MODEL_LOG - [PID]29376
2023-09-08T17:39:46,191 [DEBUG] W-9025-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9025-yolov7n_1.0 State change null -> WORKER_STARTED
2023-09-08T17:39:46,191 [INFO ] W-9025-yolov7n_1.0-stdout MODEL_LOG - Torch worker started.
2023-09-08T17:39:46,192 [INFO ] W-9025-yolov7n_1.0-stdout MODEL_LOG - Python runtime: 3.9.17
2023-09-08T17:39:46,193 [INFO ] W-9025-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9025
2023-09-08T17:39:46,198 [INFO ] W-9025-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD to backend at: 1694165986198
2023-09-08T17:39:46,198 [INFO ] W-9025-yolov7n_1.0-stdout MODEL_LOG - Connection accepted: ('127.0.0.1', 9025).
2023-09-08T17:39:46,213 [INFO ] W-9010-yolov7n_1.0-stdout MODEL_LOG - Listening on addr:port: 127.0.0.1:9010
2023-09-08T17:39:46,215 [INFO ] W-9025-yolov7n_1.0-stdout MODEL_LOG - model_name: yolov7n, batchSize: 1
2023-09-08T17:39:46,236 [INFO ] W-9010-yolov7n_1.0-stdout MODEL_LOG - Successfully loaded D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml.
2023-09-08T17:39:46,237 [INFO ] W-9010-yolov7n_1.0-stdout MODEL_LOG - [PID]29304
2023-09-08T17:39:46,238 [DEBUG] W-9010-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9010-yolov7n_1.0 State change null -> WORKER_STARTED
2023-09-08T17:39:46,238 [INFO ] W-9010-yolov7n_1.0-stdout MODEL_LOG - Torch worker started.
2023-09-08T17:39:46,239 [INFO ] W-9010-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9010
2023-09-08T17:39:46,239 [INFO ] W-9010-yolov7n_1.0-stdout MODEL_LOG - Python runtime: 3.9.17
2023-09-08T17:39:46,243 [INFO ] W-9012-yolov7n_1.0-stdout MODEL_LOG - Listening on addr:port: 127.0.0.1:9012
2023-09-08T17:39:46,244 [INFO ] W-9010-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD to backend at: 1694165986244
2023-09-08T17:39:46,244 [INFO ] W-9010-yolov7n_1.0-stdout MODEL_LOG - Connection accepted: ('127.0.0.1', 9010).
2023-09-08T17:39:46,262 [INFO ] W-9010-yolov7n_1.0-stdout MODEL_LOG - model_name: yolov7n, batchSize: 1
2023-09-08T17:39:46,265 [INFO ] W-9012-yolov7n_1.0-stdout MODEL_LOG - Successfully loaded D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml.
2023-09-08T17:39:46,269 [INFO ] W-9012-yolov7n_1.0-stdout MODEL_LOG - [PID]22116
2023-09-08T17:39:46,269 [DEBUG] W-9012-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9012-yolov7n_1.0 State change null -> WORKER_STARTED
2023-09-08T17:39:46,269 [INFO ] W-9012-yolov7n_1.0-stdout MODEL_LOG - Torch worker started.
2023-09-08T17:39:46,270 [INFO ] W-9012-yolov7n_1.0-stdout MODEL_LOG - Python runtime: 3.9.17
2023-09-08T17:39:46,271 [INFO ] W-9012-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9012
2023-09-08T17:39:46,273 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - qjz======================qjz================qjz0
2023-09-08T17:39:46,273 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - custom_handler_yoloV7.py
2023-09-08T17:39:46,274 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - custom_handler_yoloV7.py
2023-09-08T17:39:46,275 [INFO ] W-9012-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD to backend at: 1694165986275
2023-09-08T17:39:46,275 [INFO ] W-9012-yolov7n_1.0-stdout MODEL_LOG - Connection accepted: ('127.0.0.1', 9012).
2023-09-08T17:39:46,275 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - qjz======================qjz================qjz1
2023-09-08T17:39:46,276 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - Backend worker process died.
2023-09-08T17:39:46,276 [INFO ] nioEventLoopGroup-5-3 org.pytorch.serve.wlm.WorkerThread - 9007 Worker disconnected. WORKER_STARTED
2023-09-08T17:39:46,276 [INFO ] W-9006-yolov7n_1.0-stdout MODEL_LOG - qjz======================qjz================qjz0
2023-09-08T17:39:46,277 [INFO ] nioEventLoopGroup-5-4 org.pytorch.serve.wlm.WorkerThread - 9006 Worker disconnected. WORKER_STARTED
2023-09-08T17:39:46,280 [INFO ] W-9013-yolov7n_1.0-stdout MODEL_LOG - qjz======================qjz================qjz0
2023-09-08T17:39:46,281 [INFO ] nioEventLoopGroup-5-2 org.pytorch.serve.wlm.WorkerThread - 9013 Worker disconnected. WORKER_STARTED
2023-09-08T17:39:46,281 [INFO ] W-9020-yolov7n_1.0-stdout MODEL_LOG - Listening on addr:port: 127.0.0.1:9020
2023-09-08T17:39:46,284 [DEBUG] W-9007-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2023-09-08T17:39:46,284 [INFO ] W-9004-yolov7n_1.0-stdout MODEL_LOG - qjz======================qjz================qjz0
2023-09-08T17:39:46,285 [INFO ] nioEventLoopGroup-5-1 org.pytorch.serve.wlm.WorkerThread - 9004 Worker disconnected. WORKER_STARTED
2023-09-08T17:39:46,287 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - Traceback (most recent call last):
2023-09-08T17:39:46,290 [DEBUG] W-9004-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2023-09-08T17:39:46,290 [INFO ] W-9004-yolov7n_1.0-stdout MODEL_LOG - custom_handler_yoloV7.py
2023-09-08T17:39:46,293 [DEBUG] W-9013-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2023-09-08T17:39:46,294 [INFO ] W-9013-yolov7n_1.0-stdout MODEL_LOG - custom_handler_yoloV7.py
2023-09-08T17:39:46,296 [DEBUG] W-9006-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2023-09-08T17:39:46,297 [INFO ] W-9006-yolov7n_1.0-stdout MODEL_LOG - custom_handler_yoloV7.py
2023-09-08T17:39:46,298 [INFO ] W-9013-yolov7n_1.0-stdout MODEL_LOG - custom_handler_yoloV7.py
2023-09-08T17:39:46,299 [INFO ] W-9004-yolov7n_1.0-stdout MODEL_LOG - custom_handler_yoloV7.py
2023-09-08T17:39:46,301 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - File "D:\pyenvironment\envs\p39pyt171\lib\site-packages\ts\model_loader.py", line 101, in load
2023-09-08T17:39:46,302 [INFO ] W-9013-yolov7n_1.0-stdout MODEL_LOG - qjz======================qjz================qjz1
2023-09-08T17:39:46,302 [INFO ] W-9004-yolov7n_1.0-stdout MODEL_LOG - qjz======================qjz================qjz1
2023-09-08T17:39:46,305 [INFO ] W-9013-yolov7n_1.0-stdout MODEL_LOG - Backend worker process died.
2023-09-08T17:39:46,303 [INFO ] W-9006-yolov7n_1.0-stdout MODEL_LOG - custom_handler_yoloV7.py
2023-09-08T17:39:46,307 [INFO ] W-9007-yolov7n_1.0-stdout MODEL_LOG - module, function_name = self._load_handler_file(handler)
2023-09-08T17:39:46,300 [DEBUG] W-9004-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException: null
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1679) ~[?:?]
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:435) ~[?:?]
	at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:213) [model-server.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
2023-09-08T17:39:46,304 [DEBUG] W-9006-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException: null
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1679) ~[?:?]
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:435) ~[?:?]
	at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:213) [model-server.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
2023-09-08T17:39:46,292 [DEBUG] W-9007-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException: null
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1679) ~[?:?]
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:435) ~[?:?]
	at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:213) [model-server.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
2023-09-08T17:39:46,298 [DEBUG] W-9013-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException: null
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1679) ~[?:?]
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:435) ~[?:?]
	at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:213) [model-server.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
2023-09-08T17:39:46,306 [INFO ] W-9020-yolov7n_1.0-stdout MODEL_LOG - Successfully loaded D:\pyenvironment\envs\p39pyt171\Lib\site-packages/ts/configs/metrics.yaml.
2023-09-08T17:39:46,306 [INFO ] W-9004-yolov7n_1.0-stdout MODEL_LOG - Backend worker process died.
2023-09-08T17:39:46,306 [INFO ] W-9012-yolov7n_1.0-stdout MODEL_LOG - model_name: yolov7n, batchSize: 1
2023-09-08T17:39:46,314 [INFO ] W-9015-yolov7n_1.0-stdout MODEL_LOG - Listening on addr:port: 127.0.0.1:9015
2023-09-08T17:39:46,320 [INFO ] W-9020-yolov7n_1.0-stdout MODEL_LOG - [PID]24088
2023-09-08T17:39:46,329 [DEBUG] W-9020-yolov7n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9020-yolov7n_1.0 State change null -> WORKER_STARTED
=========================#2 END=========================
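The Python-side traceback in the log is cut off right after ts/model_loader.py's load() calls self._load_handler_file(handler), so the actual exception that kills each worker never appears in this dump. A minimal sketch (the demo file name custom_handler_demo.py and its handle() stub are hypothetical stand-ins) that imports a handler file by path the same importlib-based way, so whatever fails inside custom_handler_yoloV7.py, a missing package or a syntax error, is raised in plain sight when run outside TorchServe:

```python
# Sketch: load a TorchServe-style handler file directly with importlib,
# mirroring the path-based import that ts/model_loader.py performs, so any
# import-time failure in the handler surfaces as a normal traceback.
# Assumption: the worker dies while importing the custom handler; the demo
# file below is a stand-in for the real custom_handler_yoloV7.py.
import importlib.util
import os
import sys
import tempfile

def load_handler_file(handler_path):
    """Import a handler .py file by path and return the module object."""
    module_name = os.path.splitext(os.path.basename(handler_path))[0]
    spec = importlib.util.spec_from_file_location(module_name, handler_path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module
    spec.loader.exec_module(module)  # ImportError/SyntaxError raised here
    return module

# Hypothetical stand-in handler; point handler_path at the real
# custom_handler_yoloV7.py to reproduce the worker's load step.
handler_path = os.path.join(tempfile.mkdtemp(), "custom_handler_demo.py")
with open(handler_path, "w") as f:
    f.write("def handle(data, context):\n    return data\n")

module = load_handler_file(handler_path)
print(callable(module.handle))  # True
```

If the real handler fails at exec_module, the exception raised there is very likely the reason every worker logs "Backend worker process died."; simply running `python custom_handler_yoloV7.py` in the same environment is an even quicker smoke test for import errors.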