
Server crashes whilst trying to spin up Mistral

Open · harryjulian opened this issue 2 years ago · 8 comments

As described in the title, I'm trying to spin up mistralai/Mistral-7B-v0.1 using the examples in the README. This is on an EC2 g5.xlarge.

import mii
client = mii.serve("mistralai/Mistral-7B-v0.1")
response = client.generate("Deepspeed is", max_new_tokens=128)
print(response.response)

Dependencies installed in a virtual environment.

Full Log:

[2023-11-06 10:46:25,260] [INFO] [server.py:97:__init__] Hostfile /job/hostfile not found, creating hostfile.
[2023-11-06 10:46:25,260] [INFO] [server.py:97:__init__] Hostfile /job/hostfile not found, creating hostfile.
[2023-11-06 10:46:25,266] [INFO] [server.py:166:_launch_server_process] msg_server launch: ['deepspeed', '-i', 'localhost:0', '--master_port', '29500', '--master_addr', 'localhost', '--no_ssh_check', '--no_local_rank', '--no_python', '/usr/bin/python3', '-m', 'mii.launch.multi_gpu_server', '--deployment-name', 'mistralai/Mistral-7B-v0.1-mii-deployment', '--load-balancer-port', '50050', '--restful-gateway-port', '51080', '--server-port', '50051', '--zmq-port', '25555', '--model-config', 'eyJtb2RlbF9uYW1lX29yX3BhdGgiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItdjAuMSIsICJ0b2tlbml6ZXIiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItdjAuMSIsICJ0YXNrIjogInRleHQtZ2VuZXJhdGlvbiIsICJ0ZW5zb3JfcGFyYWxsZWwiOiAxLCAiaW5mZXJlbmNlX2VuZ2luZV9jb25maWciOiB7InRlbnNvcl9wYXJhbGxlbCI6IHsidHBfc2l6ZSI6IDF9LCAic3RhdGVfbWFuYWdlciI6IHsibWF4X3RyYWNrZWRfc2VxdWVuY2VzIjogMjA0OCwgIm1heF9yYWdnZWRfYmF0Y2hfc2l6ZSI6IDc2OCwgIm1heF9yYWdnZWRfc2VxdWVuY2VfY291bnQiOiA1MTIsICJtYXhfY29udGV4dCI6IDgxOTIsICJtZW1vcnlfY29uZmlnIjogeyJtb2RlIjogInJlc2VydmUiLCAic2l6ZSI6IDEwMDAwMDAwMDB9LCAib2ZmbG9hZCI6IGZhbHNlfX0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgInptcV9wb3J0X251bWJlciI6IDI1NTU1LCAicmVwbGljYV9udW0iOiAxLCAicmVwbGljYV9jb25maWdzIjogW3siaG9zdG5hbWUiOiAibG9jYWxob3N0IiwgInRlbnNvcl9wYXJhbGxlbF9wb3J0cyI6IFs1MDA1MV0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgImdwdV9pbmRpY2VzIjogWzBdLCAiem1xX3BvcnQiOiAyNTU1NX1dLCAibWF4X2xlbmd0aCI6IG51bGwsICJhbGxfcmFua19vdXRwdXQiOiBmYWxzZSwgInN5bmNfZGVidWciOiBmYWxzZSwgInByb2ZpbGVfbW9kZWxfdGltZSI6IGZhbHNlfQ==']
[2023-11-06 10:46:25,266] [INFO] [server.py:166:_launch_server_process] msg_server launch: ['deepspeed', '-i', 'localhost:0', '--master_port', '29500', '--master_addr', 'localhost', '--no_ssh_check', '--no_local_rank', '--no_python', '/usr/bin/python3', '-m', 'mii.launch.multi_gpu_server', '--deployment-name', 'mistralai/Mistral-7B-v0.1-mii-deployment', '--load-balancer-port', '50050', '--restful-gateway-port', '51080', '--server-port', '50051', '--zmq-port', '25555', '--model-config', 'eyJtb2RlbF9uYW1lX29yX3BhdGgiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItdjAuMSIsICJ0b2tlbml6ZXIiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItdjAuMSIsICJ0YXNrIjogInRleHQtZ2VuZXJhdGlvbiIsICJ0ZW5zb3JfcGFyYWxsZWwiOiAxLCAiaW5mZXJlbmNlX2VuZ2luZV9jb25maWciOiB7InRlbnNvcl9wYXJhbGxlbCI6IHsidHBfc2l6ZSI6IDF9LCAic3RhdGVfbWFuYWdlciI6IHsibWF4X3RyYWNrZWRfc2VxdWVuY2VzIjogMjA0OCwgIm1heF9yYWdnZWRfYmF0Y2hfc2l6ZSI6IDc2OCwgIm1heF9yYWdnZWRfc2VxdWVuY2VfY291bnQiOiA1MTIsICJtYXhfY29udGV4dCI6IDgxOTIsICJtZW1vcnlfY29uZmlnIjogeyJtb2RlIjogInJlc2VydmUiLCAic2l6ZSI6IDEwMDAwMDAwMDB9LCAib2ZmbG9hZCI6IGZhbHNlfX0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgInptcV9wb3J0X251bWJlciI6IDI1NTU1LCAicmVwbGljYV9udW0iOiAxLCAicmVwbGljYV9jb25maWdzIjogW3siaG9zdG5hbWUiOiAibG9jYWxob3N0IiwgInRlbnNvcl9wYXJhbGxlbF9wb3J0cyI6IFs1MDA1MV0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgImdwdV9pbmRpY2VzIjogWzBdLCAiem1xX3BvcnQiOiAyNTU1NX1dLCAibWF4X2xlbmd0aCI6IG51bGwsICJhbGxfcmFua19vdXRwdXQiOiBmYWxzZSwgInN5bmNfZGVidWciOiBmYWxzZSwgInByb2ZpbGVfbW9kZWxfdGltZSI6IGZhbHNlfQ==']
[2023-11-06 10:46:25,274] [INFO] [server.py:166:_launch_server_process] msg_server launch: ['/usr/bin/python3', '-m', 'mii.launch.multi_gpu_server', '--deployment-name', 'mistralai/Mistral-7B-v0.1-mii-deployment', '--load-balancer-port', '50050', '--restful-gateway-port', '51080', '--load-balancer', '--model-config', 'eyJtb2RlbF9uYW1lX29yX3BhdGgiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItdjAuMSIsICJ0b2tlbml6ZXIiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItdjAuMSIsICJ0YXNrIjogInRleHQtZ2VuZXJhdGlvbiIsICJ0ZW5zb3JfcGFyYWxsZWwiOiAxLCAiaW5mZXJlbmNlX2VuZ2luZV9jb25maWciOiB7InRlbnNvcl9wYXJhbGxlbCI6IHsidHBfc2l6ZSI6IDF9LCAic3RhdGVfbWFuYWdlciI6IHsibWF4X3RyYWNrZWRfc2VxdWVuY2VzIjogMjA0OCwgIm1heF9yYWdnZWRfYmF0Y2hfc2l6ZSI6IDc2OCwgIm1heF9yYWdnZWRfc2VxdWVuY2VfY291bnQiOiA1MTIsICJtYXhfY29udGV4dCI6IDgxOTIsICJtZW1vcnlfY29uZmlnIjogeyJtb2RlIjogInJlc2VydmUiLCAic2l6ZSI6IDEwMDAwMDAwMDB9LCAib2ZmbG9hZCI6IGZhbHNlfX0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgInptcV9wb3J0X251bWJlciI6IDI1NTU1LCAicmVwbGljYV9udW0iOiAxLCAicmVwbGljYV9jb25maWdzIjogW3siaG9zdG5hbWUiOiAibG9jYWxob3N0IiwgInRlbnNvcl9wYXJhbGxlbF9wb3J0cyI6IFs1MDA1MV0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgImdwdV9pbmRpY2VzIjogWzBdLCAiem1xX3BvcnQiOiAyNTU1NX1dLCAibWF4X2xlbmd0aCI6IG51bGwsICJhbGxfcmFua19vdXRwdXQiOiBmYWxzZSwgInN5bmNfZGVidWciOiBmYWxzZSwgInByb2ZpbGVfbW9kZWxfdGltZSI6IGZhbHNlfQ==']
[2023-11-06 10:46:25,274] [INFO] [server.py:166:_launch_server_process] msg_server launch: ['/usr/bin/python3', '-m', 'mii.launch.multi_gpu_server', '--deployment-name', 'mistralai/Mistral-7B-v0.1-mii-deployment', '--load-balancer-port', '50050', '--restful-gateway-port', '51080', '--load-balancer', '--model-config', 'eyJtb2RlbF9uYW1lX29yX3BhdGgiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItdjAuMSIsICJ0b2tlbml6ZXIiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItdjAuMSIsICJ0YXNrIjogInRleHQtZ2VuZXJhdGlvbiIsICJ0ZW5zb3JfcGFyYWxsZWwiOiAxLCAiaW5mZXJlbmNlX2VuZ2luZV9jb25maWciOiB7InRlbnNvcl9wYXJhbGxlbCI6IHsidHBfc2l6ZSI6IDF9LCAic3RhdGVfbWFuYWdlciI6IHsibWF4X3RyYWNrZWRfc2VxdWVuY2VzIjogMjA0OCwgIm1heF9yYWdnZWRfYmF0Y2hfc2l6ZSI6IDc2OCwgIm1heF9yYWdnZWRfc2VxdWVuY2VfY291bnQiOiA1MTIsICJtYXhfY29udGV4dCI6IDgxOTIsICJtZW1vcnlfY29uZmlnIjogeyJtb2RlIjogInJlc2VydmUiLCAic2l6ZSI6IDEwMDAwMDAwMDB9LCAib2ZmbG9hZCI6IGZhbHNlfX0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgInptcV9wb3J0X251bWJlciI6IDI1NTU1LCAicmVwbGljYV9udW0iOiAxLCAicmVwbGljYV9jb25maWdzIjogW3siaG9zdG5hbWUiOiAibG9jYWxob3N0IiwgInRlbnNvcl9wYXJhbGxlbF9wb3J0cyI6IFs1MDA1MV0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgImdwdV9pbmRpY2VzIjogWzBdLCAiem1xX3BvcnQiOiAyNTU1NX1dLCAibWF4X2xlbmd0aCI6IG51bGwsICJhbGxfcmFua19vdXRwdXQiOiBmYWxzZSwgInN5bmNfZGVidWciOiBmYWxzZSwgInByb2ZpbGVfbW9kZWxfdGltZSI6IGZhbHNlfQ==']

[2023-11-06 10:46:27,056] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-11-06 10:46:27,084] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-11-06 10:46:28,425] [WARNING] [runner.py:203:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-11-06 10:46:28,431] [INFO] [runner.py:570:main] cmd = /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=29500 --no_python --no_local_rank --enable_each_rank_log=None /usr/bin/python3 -m mii.launch.multi_gpu_server --deployment-name mistralai/Mistral-7B-v0.1-mii-deployment --load-balancer-port 50050 --restful-gateway-port 51080 --server-port 50051 --zmq-port 25555 --model-config eyJtb2RlbF9uYW1lX29yX3BhdGgiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItdjAuMSIsICJ0b2tlbml6ZXIiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItdjAuMSIsICJ0YXNrIjogInRleHQtZ2VuZXJhdGlvbiIsICJ0ZW5zb3JfcGFyYWxsZWwiOiAxLCAiaW5mZXJlbmNlX2VuZ2luZV9jb25maWciOiB7InRlbnNvcl9wYXJhbGxlbCI6IHsidHBfc2l6ZSI6IDF9LCAic3RhdGVfbWFuYWdlciI6IHsibWF4X3RyYWNrZWRfc2VxdWVuY2VzIjogMjA0OCwgIm1heF9yYWdnZWRfYmF0Y2hfc2l6ZSI6IDc2OCwgIm1heF9yYWdnZWRfc2VxdWVuY2VfY291bnQiOiA1MTIsICJtYXhfY29udGV4dCI6IDgxOTIsICJtZW1vcnlfY29uZmlnIjogeyJtb2RlIjogInJlc2VydmUiLCAic2l6ZSI6IDEwMDAwMDAwMDB9LCAib2ZmbG9hZCI6IGZhbHNlfX0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgInptcV9wb3J0X251bWJlciI6IDI1NTU1LCAicmVwbGljYV9udW0iOiAxLCAicmVwbGljYV9jb25maWdzIjogW3siaG9zdG5hbWUiOiAibG9jYWxob3N0IiwgInRlbnNvcl9wYXJhbGxlbF9wb3J0cyI6IFs1MDA1MV0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgImdwdV9pbmRpY2VzIjogWzBdLCAiem1xX3BvcnQiOiAyNTU1NX1dLCAibWF4X2xlbmd0aCI6IG51bGwsICJhbGxfcmFua19vdXRwdXQiOiBmYWxzZSwgInN5bmNfZGVidWciOiBmYWxzZSwgInByb2ZpbGVfbW9kZWxfdGltZSI6IGZhbHNlfQ==
Starting load balancer on port: 50050
About to start server
Started
[2023-11-06 10:46:30,132] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-11-06 10:46:30,286] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 10:46:30,286] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 10:46:31,420] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [0]}
[2023-11-06 10:46:31,420] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=1, node_rank=0
[2023-11-06 10:46:31,420] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})
[2023-11-06 10:46:31,420] [INFO] [launch.py:163:main] dist_world_size=1
[2023-11-06 10:46:31,420] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0
[2023-11-06 10:46:33,140] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-11-06 10:46:34,838] [INFO] [comm.py:637:init_distributed] cdb=None
[2023-11-06 10:46:34,838] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /usr/lib/python3.8/runpy.py:194 in _run_module_as_main                                           │
│                                                                                                  │
│   191 │   main_globals = sys.modules["__main__"].__dict__                                        │
│   192 │   if alter_argv:                                                                         │
│   193 │   │   sys.argv[0] = mod_spec.origin                                                      │
│ ❱ 194 │   return _run_code(code, main_globals, None,                                             │
│   195 │   │   │   │   │    "__main__", mod_spec)                                                 │
│   196                                                                                            │
│   197 def run_module(mod_name, init_globals=None,                                                │
│                                                                                                  │
│ /usr/lib/python3.8/runpy.py:87 in _run_code                                                      │
│                                                                                                  │
│    84 │   │   │   │   │      __loader__ = loader,                                                │
│    85 │   │   │   │   │      __package__ = pkg_name,                                             │
│    86 │   │   │   │   │      __spec__ = mod_spec)                                                │
│ ❱  87 │   exec(code, run_globals)                                                                │
│    88 │   return run_globals                                                                     │
│    89                                                                                            │
│    90 def _run_module_code(code, init_globals=None,                                              │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/mii/launch/multi_gpu_server.py:97 in <module>    │
│                                                                                                  │
│   94                                                                                             │
│   95 if __name__ == "__main__":                                                                  │
│   96 │   # python -m mii.launch.multi_gpu_server                                                 │
│ ❱ 97 │   main()                                                                                  │
│   98                                                                                             │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/mii/launch/multi_gpu_server.py:90 in main        │
│                                                                                                  │
│   87 │   │   local_rank = int(os.getenv("LOCAL_RANK", "0"))                                      │
│   88 │   │   port = args.server_port + local_rank                                                │
│   89 │   │   args.model_config.zmq_port_number = args.zmq_port                                   │
│ ❱ 90 │   │   inference_pipeline = async_pipeline(args.model_config)                              │
│   91 │   │   print(f"Starting server on port: {port}")                                           │
│   92 │   │   serve_inference(inference_pipeline, port)                                           │
│   93                                                                                             │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/mii/pipeline.py:41 in async_pipeline             │
│                                                                                                  │
│   38                                                                                             │
│   39                                                                                             │
│   40 def async_pipeline(model_config: ModelConfig) -> MIIAsyncPipeline:                          │
│ ❱ 41 │   inference_engine = load_model(model_config)                                             │
│   42 │   tokenizer = load_tokenizer(model_config)                                                │
│   43 │   inference_pipeline = MIIAsyncPipeline(inference_engine=inference_engine,                │
│   44 │   │   │   │   │   │   │   │   │   │     tokenizer=tokenizer,                              │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/mii/models.py:17 in load_model                   │
│                                                                                                  │
│   14 │   init_distributed(model_config)                                                          │
│   15 │   provider = model_config.provider                                                        │
│   16 │   if provider == ModelProvider.HUGGING_FACE:                                              │
│ ❱ 17 │   │   inference_engine = build_hf_engine(                                                 │
│   18 │   │   │   path=model_config.model_name_or_path,                                           │
│   19 │   │   │   engine_config=model_config.inference_engine_config)                             │
│   20 │   else:                                                                                   │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/deepspeed/inference/v2/engine_factory.py:27 in   │
│ build_hf_engine                                                                                  │
│                                                                                                  │
│   24 │   inference_logger(level=debug_level)                                                     │
│   25 │                                                                                           │
│   26 │   # get HF checkpoint engine                                                              │
│ ❱ 27 │   checkpoint_engine = HuggingFaceCheckpointEngine(path)                                   │
│   28 │                                                                                           │
│   29 │   # get model config from HF AutoConfig                                                   │
│   30 │   model_config = checkpoint_engine.model_config                                           │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/deepspeed/inference/v2/checkpoint/huggingface_en │
│ gine.py:23 in __init__                                                                           │
│                                                                                                  │
│    20 │   │                                                                                      │
│    21 │   │   self.model_name_or_path = model_name_or_path                                       │
│    22 │   │   self.auth_token = auth_token                                                       │
│ ❱  23 │   │   self.model_config = AutoConfig.from_pretrained(self.model_name_or_path)            │
│    24 │   │   self.generation_config = GenerationConfig.from_pretrained(self.model_name_or_pat   │
│    25 │   │   # Define this property here so we can use it in the model implementation           │
│    26 │   │   if not hasattr(self.model_config, "max_seq_length"):                               │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py:9 │
│ 98 in from_pretrained                                                                            │
│                                                                                                  │
│    995 │   │   │   _ = kwargs.pop("code_revision", None)                                         │
│    996 │   │   │   return config_class.from_pretrained(pretrained_model_name_or_path, **kwargs)  │
│    997 │   │   elif "model_type" in config_dict:                                                 │
│ ❱  998 │   │   │   config_class = CONFIG_MAPPING[config_dict["model_type"]]                      │
│    999 │   │   │   return config_class.from_dict(config_dict, **unused_kwargs)                   │
│   1000 │   │   else:                                                                             │
│   1001 │   │   │   # Fallback: use pattern matching on the string.                               │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py:7 │
│ 10 in __getitem__                                                                                │
│                                                                                                  │
│    707 │   │   if key in self._extra_content:                                                    │
│    708 │   │   │   return self._extra_content[key]                                               │
│    709 │   │   if key not in self._mapping:                                                      │
│ ❱  710 │   │   │   raise KeyError(key)                                                           │
│    711 │   │   value = self._mapping[key]                                                        │
│    712 │   │   module_name = model_type_to_module_name(key)                                      │
│    713 │   │   if module_name not in self._modules:                                              │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
KeyError: 'mistral'
[2023-11-06 10:46:35,290] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 10:46:35,290] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 10:46:36,430] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 3299
[2023-11-06 10:46:36,430] [ERROR] [launch.py:321:sigkill_handler] ['/usr/bin/python3', '-m', 'mii.launch.multi_gpu_server', '--deployment-name', 'mistralai/Mistral-7B-v0.1-mii-deployment', '--load-balancer-port', '50050', '--restful-gateway-port', '51080', '--server-port', '50051', '--zmq-port', '25555', '--model-config', 'eyJtb2RlbF9uYW1lX29yX3BhdGgiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItdjAuMSIsICJ0b2tlbml6ZXIiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItdjAuMSIsICJ0YXNrIjogInRleHQtZ2VuZXJhdGlvbiIsICJ0ZW5zb3JfcGFyYWxsZWwiOiAxLCAiaW5mZXJlbmNlX2VuZ2luZV9jb25maWciOiB7InRlbnNvcl9wYXJhbGxlbCI6IHsidHBfc2l6ZSI6IDF9LCAic3RhdGVfbWFuYWdlciI6IHsibWF4X3RyYWNrZWRfc2VxdWVuY2VzIjogMjA0OCwgIm1heF9yYWdnZWRfYmF0Y2hfc2l6ZSI6IDc2OCwgIm1heF9yYWdnZWRfc2VxdWVuY2VfY291bnQiOiA1MTIsICJtYXhfY29udGV4dCI6IDgxOTIsICJtZW1vcnlfY29uZmlnIjogeyJtb2RlIjogInJlc2VydmUiLCAic2l6ZSI6IDEwMDAwMDAwMDB9LCAib2ZmbG9hZCI6IGZhbHNlfX0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgInptcV9wb3J0X251bWJlciI6IDI1NTU1LCAicmVwbGljYV9udW0iOiAxLCAicmVwbGljYV9jb25maWdzIjogW3siaG9zdG5hbWUiOiAibG9jYWxob3N0IiwgInRlbnNvcl9wYXJhbGxlbF9wb3J0cyI6IFs1MDA1MV0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgImdwdV9pbmRpY2VzIjogWzBdLCAiem1xX3BvcnQiOiAyNTU1NX1dLCAibWF4X2xlbmd0aCI6IG51bGwsICJhbGxfcmFua19vdXRwdXQiOiBmYWxzZSwgInN5bmNfZGVidWciOiBmYWxzZSwgInByb2ZpbGVfbW9kZWxfdGltZSI6IGZhbHNlfQ=='] exits with return code = 1
[2023-11-06 10:46:40,295] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 10:46:40,295] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module>:1                                                                                    │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/mii/server.py:61 in serve                        │
│                                                                                                  │
│    58 │   create_score_file(mii_config)                                                          │
│    59 │                                                                                          │
│    60 │   if mii_config.deployment_type == DeploymentType.LOCAL:                                 │
│ ❱  61 │   │   import_score_file(mii_config.deployment_name, DeploymentType.LOCAL).init()         │
│    62 │   │   return MIIClient(mii_config=mii_config)                                            │
│    63 │   if mii_config.deployment_type == DeploymentType.AML:                                   │
│    64 │   │   acr_name = mii.aml_related.utils.get_acr_name()                                    │
│                                                                                                  │
│ /tmp/mii_cache/mistralai/Mistral-7B-v0.1-mii-deployment/score.py:33 in init                      │
│                                                                                                  │
│   30 │   │   start_server = False                                                                │
│   31 │                                                                                           │
│   32 │   if start_server:                                                                        │
│ ❱ 33 │   │   mii.server.MIIServer(mii_config)                                                    │
│   34 │                                                                                           │
│   35 │   global model                                                                            │
│   36 │   model = None                                                                            │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/mii/server.py:106 in __init__                    │
│                                                                                                  │
│   103 │   │   mii_config.generate_replica_configs()                                              │
│   104 │   │                                                                                      │
│   105 │   │   processes = self._initialize_service(mii_config)                                   │
│ ❱ 106 │   │   self._wait_until_server_is_live(processes,                                         │
│   107 │   │   │   │   │   │   │   │   │   │   mii_config.model_config.replica_configs)           │
│   108 │                                                                                          │
│   109 │   def _wait_until_server_is_live(self,                                                   │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/mii/server.py:121 in _wait_until_server_is_live  │
│                                                                                                  │
│   118 │   │   │   │   │   for port in repl_config.tensor_parallel_ports)                         │
│   119 │   │   │   │   process_alive = self._is_server_process_alive(process)                     │
│   120 │   │   │   │   if not process_alive:                                                      │
│ ❱ 121 │   │   │   │   │   raise RuntimeError(                                                    │
│   122 │   │   │   │   │   │   "server crashed for some reason, unable to proceed")               │
│   123 │   │   │   │   time.sleep(4)                                                              │
│   124 │   │   │   │   logger.info("waiting for server to start...")                              │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: server crashed for some reason, unable to proceed

harryjulian · Nov 06 '23

Can you check your transformers version? You need transformers >= 4.34.0 to run Mistral.
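
(For example, to check and upgrade -- just a sketch, assuming a pip-managed environment:)

python3 -c "import transformers; print(transformers.__version__)"
pip install -U "transformers>=4.34.0"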

idealover · Nov 06 '23

Updated my transformers version to 4.35.0 -- unfortunately I'm now receiving another error.

[2023-11-06 15:40:59,506] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-11-06 15:41:00,889] [INFO] [server.py:97:__init__] Hostfile /job/hostfile not found, creating hostfile.
[2023-11-06 15:41:00,889] [INFO] [server.py:97:__init__] Hostfile /job/hostfile not found, creating hostfile.
[2023-11-06 15:41:00,898] [INFO] [server.py:166:_launch_server_process] msg_server launch: ['deepspeed', '-i', 'localhost:0', '--master_port', '29500', '--master_addr', 'localhost', '--no_ssh_check', '--no_local_rank', '--no_python', '/usr/bin/python3', '-m', 'mii.launch.multi_gpu_server', '--deployment-name', 'mistralai/Mistral-7B-Instruct-v0.1-mii-deployment', '--load-balancer-port', '50050', '--restful-gateway-port', '51080', '--server-port', '50051', '--zmq-port', '25555', '--model-config', 'eyJtb2RlbF9uYW1lX29yX3BhdGgiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0b2tlbml6ZXIiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0YXNrIjogInRleHQtZ2VuZXJhdGlvbiIsICJ0ZW5zb3JfcGFyYWxsZWwiOiAxLCAiaW5mZXJlbmNlX2VuZ2luZV9jb25maWciOiB7InRlbnNvcl9wYXJhbGxlbCI6IHsidHBfc2l6ZSI6IDF9LCAic3RhdGVfbWFuYWdlciI6IHsibWF4X3RyYWNrZWRfc2VxdWVuY2VzIjogMjA0OCwgIm1heF9yYWdnZWRfYmF0Y2hfc2l6ZSI6IDc2OCwgIm1heF9yYWdnZWRfc2VxdWVuY2VfY291bnQiOiA1MTIsICJtYXhfY29udGV4dCI6IDgxOTIsICJtZW1vcnlfY29uZmlnIjogeyJtb2RlIjogInJlc2VydmUiLCAic2l6ZSI6IDEwMDAwMDAwMDB9LCAib2ZmbG9hZCI6IGZhbHNlfX0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgInptcV9wb3J0X251bWJlciI6IDI1NTU1LCAicmVwbGljYV9udW0iOiAxLCAicmVwbGljYV9jb25maWdzIjogW3siaG9zdG5hbWUiOiAibG9jYWxob3N0IiwgInRlbnNvcl9wYXJhbGxlbF9wb3J0cyI6IFs1MDA1MV0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgImdwdV9pbmRpY2VzIjogWzBdLCAiem1xX3BvcnQiOiAyNTU1NX1dLCAibWF4X2xlbmd0aCI6IG51bGwsICJhbGxfcmFua19vdXRwdXQiOiBmYWxzZSwgInN5bmNfZGVidWciOiBmYWxzZSwgInByb2ZpbGVfbW9kZWxfdGltZSI6IGZhbHNlfQ==']
[2023-11-06 15:41:00,898] [INFO] [server.py:166:_launch_server_process] msg_server launch: ['deepspeed', '-i', 'localhost:0', '--master_port', '29500', '--master_addr', 'localhost', '--no_ssh_check', '--no_local_rank', '--no_python', '/usr/bin/python3', '-m', 'mii.launch.multi_gpu_server', '--deployment-name', 'mistralai/Mistral-7B-Instruct-v0.1-mii-deployment', '--load-balancer-port', '50050', '--restful-gateway-port', '51080', '--server-port', '50051', '--zmq-port', '25555', '--model-config', 'eyJtb2RlbF9uYW1lX29yX3BhdGgiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0b2tlbml6ZXIiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0YXNrIjogInRleHQtZ2VuZXJhdGlvbiIsICJ0ZW5zb3JfcGFyYWxsZWwiOiAxLCAiaW5mZXJlbmNlX2VuZ2luZV9jb25maWciOiB7InRlbnNvcl9wYXJhbGxlbCI6IHsidHBfc2l6ZSI6IDF9LCAic3RhdGVfbWFuYWdlciI6IHsibWF4X3RyYWNrZWRfc2VxdWVuY2VzIjogMjA0OCwgIm1heF9yYWdnZWRfYmF0Y2hfc2l6ZSI6IDc2OCwgIm1heF9yYWdnZWRfc2VxdWVuY2VfY291bnQiOiA1MTIsICJtYXhfY29udGV4dCI6IDgxOTIsICJtZW1vcnlfY29uZmlnIjogeyJtb2RlIjogInJlc2VydmUiLCAic2l6ZSI6IDEwMDAwMDAwMDB9LCAib2ZmbG9hZCI6IGZhbHNlfX0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgInptcV9wb3J0X251bWJlciI6IDI1NTU1LCAicmVwbGljYV9udW0iOiAxLCAicmVwbGljYV9jb25maWdzIjogW3siaG9zdG5hbWUiOiAibG9jYWxob3N0IiwgInRlbnNvcl9wYXJhbGxlbF9wb3J0cyI6IFs1MDA1MV0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgImdwdV9pbmRpY2VzIjogWzBdLCAiem1xX3BvcnQiOiAyNTU1NX1dLCAibWF4X2xlbmd0aCI6IG51bGwsICJhbGxfcmFua19vdXRwdXQiOiBmYWxzZSwgInN5bmNfZGVidWciOiBmYWxzZSwgInByb2ZpbGVfbW9kZWxfdGltZSI6IGZhbHNlfQ==']
[2023-11-06 15:41:00,908] [INFO] [server.py:166:_launch_server_process] msg_server launch: ['/usr/bin/python3', '-m', 'mii.launch.multi_gpu_server', '--deployment-name', 'mistralai/Mistral-7B-Instruct-v0.1-mii-deployment', '--load-balancer-port', '50050', '--restful-gateway-port', '51080', '--load-balancer', '--model-config', 'eyJtb2RlbF9uYW1lX29yX3BhdGgiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0b2tlbml6ZXIiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0YXNrIjogInRleHQtZ2VuZXJhdGlvbiIsICJ0ZW5zb3JfcGFyYWxsZWwiOiAxLCAiaW5mZXJlbmNlX2VuZ2luZV9jb25maWciOiB7InRlbnNvcl9wYXJhbGxlbCI6IHsidHBfc2l6ZSI6IDF9LCAic3RhdGVfbWFuYWdlciI6IHsibWF4X3RyYWNrZWRfc2VxdWVuY2VzIjogMjA0OCwgIm1heF9yYWdnZWRfYmF0Y2hfc2l6ZSI6IDc2OCwgIm1heF9yYWdnZWRfc2VxdWVuY2VfY291bnQiOiA1MTIsICJtYXhfY29udGV4dCI6IDgxOTIsICJtZW1vcnlfY29uZmlnIjogeyJtb2RlIjogInJlc2VydmUiLCAic2l6ZSI6IDEwMDAwMDAwMDB9LCAib2ZmbG9hZCI6IGZhbHNlfX0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgInptcV9wb3J0X251bWJlciI6IDI1NTU1LCAicmVwbGljYV9udW0iOiAxLCAicmVwbGljYV9jb25maWdzIjogW3siaG9zdG5hbWUiOiAibG9jYWxob3N0IiwgInRlbnNvcl9wYXJhbGxlbF9wb3J0cyI6IFs1MDA1MV0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgImdwdV9pbmRpY2VzIjogWzBdLCAiem1xX3BvcnQiOiAyNTU1NX1dLCAibWF4X2xlbmd0aCI6IG51bGwsICJhbGxfcmFua19vdXRwdXQiOiBmYWxzZSwgInN5bmNfZGVidWciOiBmYWxzZSwgInByb2ZpbGVfbW9kZWxfdGltZSI6IGZhbHNlfQ==']
[2023-11-06 15:41:00,908] [INFO] [server.py:166:_launch_server_process] msg_server launch: ['/usr/bin/python3', '-m', 'mii.launch.multi_gpu_server', '--deployment-name', 'mistralai/Mistral-7B-Instruct-v0.1-mii-deployment', '--load-balancer-port', '50050', '--restful-gateway-port', '51080', '--load-balancer', '--model-config', 'eyJtb2RlbF9uYW1lX29yX3BhdGgiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0b2tlbml6ZXIiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0YXNrIjogInRleHQtZ2VuZXJhdGlvbiIsICJ0ZW5zb3JfcGFyYWxsZWwiOiAxLCAiaW5mZXJlbmNlX2VuZ2luZV9jb25maWciOiB7InRlbnNvcl9wYXJhbGxlbCI6IHsidHBfc2l6ZSI6IDF9LCAic3RhdGVfbWFuYWdlciI6IHsibWF4X3RyYWNrZWRfc2VxdWVuY2VzIjogMjA0OCwgIm1heF9yYWdnZWRfYmF0Y2hfc2l6ZSI6IDc2OCwgIm1heF9yYWdnZWRfc2VxdWVuY2VfY291bnQiOiA1MTIsICJtYXhfY29udGV4dCI6IDgxOTIsICJtZW1vcnlfY29uZmlnIjogeyJtb2RlIjogInJlc2VydmUiLCAic2l6ZSI6IDEwMDAwMDAwMDB9LCAib2ZmbG9hZCI6IGZhbHNlfX0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgInptcV9wb3J0X251bWJlciI6IDI1NTU1LCAicmVwbGljYV9udW0iOiAxLCAicmVwbGljYV9jb25maWdzIjogW3siaG9zdG5hbWUiOiAibG9jYWxob3N0IiwgInRlbnNvcl9wYXJhbGxlbF9wb3J0cyI6IFs1MDA1MV0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgImdwdV9pbmRpY2VzIjogWzBdLCAiem1xX3BvcnQiOiAyNTU1NX1dLCAibWF4X2xlbmd0aCI6IG51bGwsICJhbGxfcmFua19vdXRwdXQiOiBmYWxzZSwgInN5bmNfZGVidWciOiBmYWxzZSwgInByb2ZpbGVfbW9kZWxfdGltZSI6IGZhbHNlfQ==']
[2023-11-06 15:41:02,950] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-11-06 15:41:02,950] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-11-06 15:41:03,515] [WARNING] [runner.py:203:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-11-06 15:41:03,520] [INFO] [runner.py:570:main] cmd = /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=29500 --no_python --no_local_rank --enable_each_rank_log=None /usr/bin/python3 -m mii.launch.multi_gpu_server --deployment-name mistralai/Mistral-7B-Instruct-v0.1-mii-deployment --load-balancer-port 50050 --restful-gateway-port 51080 --server-port 50051 --zmq-port 25555 --model-config eyJtb2RlbF9uYW1lX29yX3BhdGgiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0b2tlbml6ZXIiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0YXNrIjogInRleHQtZ2VuZXJhdGlvbiIsICJ0ZW5zb3JfcGFyYWxsZWwiOiAxLCAiaW5mZXJlbmNlX2VuZ2luZV9jb25maWciOiB7InRlbnNvcl9wYXJhbGxlbCI6IHsidHBfc2l6ZSI6IDF9LCAic3RhdGVfbWFuYWdlciI6IHsibWF4X3RyYWNrZWRfc2VxdWVuY2VzIjogMjA0OCwgIm1heF9yYWdnZWRfYmF0Y2hfc2l6ZSI6IDc2OCwgIm1heF9yYWdnZWRfc2VxdWVuY2VfY291bnQiOiA1MTIsICJtYXhfY29udGV4dCI6IDgxOTIsICJtZW1vcnlfY29uZmlnIjogeyJtb2RlIjogInJlc2VydmUiLCAic2l6ZSI6IDEwMDAwMDAwMDB9LCAib2ZmbG9hZCI6IGZhbHNlfX0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgInptcV9wb3J0X251bWJlciI6IDI1NTU1LCAicmVwbGljYV9udW0iOiAxLCAicmVwbGljYV9jb25maWdzIjogW3siaG9zdG5hbWUiOiAibG9jYWxob3N0IiwgInRlbnNvcl9wYXJhbGxlbF9wb3J0cyI6IFs1MDA1MV0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgImdwdV9pbmRpY2VzIjogWzBdLCAiem1xX3BvcnQiOiAyNTU1NX1dLCAibWF4X2xlbmd0aCI6IG51bGwsICJhbGxfcmFua19vdXRwdXQiOiBmYWxzZSwgInN5bmNfZGVidWciOiBmYWxzZSwgInByb2ZpbGVfbW9kZWxfdGltZSI6IGZhbHNlfQ==
Starting load balancer on port: 50050
About to start server
Started
[2023-11-06 15:41:05,446] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-11-06 15:41:05,921] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:05,921] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:05,987] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [0]}
[2023-11-06 15:41:05,987] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=1, node_rank=0
[2023-11-06 15:41:05,987] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})
[2023-11-06 15:41:05,988] [INFO] [launch.py:163:main] dist_world_size=1
[2023-11-06 15:41:05,988] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0
[2023-11-06 15:41:07,914] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-11-06 15:41:08,574] [INFO] [comm.py:637:init_distributed] cdb=None
[2023-11-06 15:41:08,574] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
Fetching 8 files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:00<00:00, 133683.00it/s]
[2023-11-06 15:41:08,960] [INFO] [engine_v2.py:64:__init__] Building model...
Using /home/ubuntu/.cache/torch_extensions/py38_cu121 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/ubuntu/.cache/torch_extensions/py38_cu121/inference_core_ops/build.ninja...
Building extension module inference_core_ops...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module inference_core_ops...
Time to load inference_core_ops op: 0.1107027530670166 seconds
Using /home/ubuntu/.cache/torch_extensions/py38_cu121 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/ubuntu/.cache/torch_extensions/py38_cu121/ragged_device_ops/build.ninja...
Building extension module ragged_device_ops...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module ragged_device_ops...
Time to load ragged_device_ops op: 0.11578202247619629 seconds
Using /home/ubuntu/.cache/torch_extensions/py38_cu121 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/ubuntu/.cache/torch_extensions/py38_cu121/ragged_ops/build.ninja...
Building extension module ragged_ops...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module ragged_ops...
Time to load ragged_ops op: 0.10139870643615723 seconds
[2023-11-06 15:41:09,962] [INFO] [huggingface_engine.py:86:parameters] Loading checkpoint: /home/ubuntu/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.1/snapshots/7ad5799710574ba1c1d953eba3077af582f3a773/pytorch_model-00002-of-00002.bin
[2023-11-06 15:41:10,922] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:10,922] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:15,927] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:15,927] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:20,930] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:20,930] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:25,934] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:25,934] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:30,939] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:30,939] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:35,943] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:35,943] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:40,946] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:40,946] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:45,950] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:45,950] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:48,605] [INFO] [huggingface_engine.py:86:parameters] Loading checkpoint: /home/ubuntu/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.1/snapshots/7ad5799710574ba1c1d953eba3077af582f3a773/pytorch_model-00001-of-00002.bin
[2023-11-06 15:41:50,954] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:50,954] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:55,958] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:41:55,958] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:00,962] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:00,962] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:05,966] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:05,966] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:10,971] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:10,971] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:15,975] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:15,975] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:20,980] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:20,980] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:25,984] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:25,984] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:30,989] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:30,989] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:35,993] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:35,993] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:40,998] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:40,998] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:46,002] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:46,002] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:51,006] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:51,006] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:56,652] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:56,652] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:42:57,684] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 3181
[2023-11-06 15:42:57,686] [ERROR] [launch.py:321:sigkill_handler] ['/usr/bin/python3', '-m', 'mii.launch.multi_gpu_server', '--deployment-name', 'mistralai/Mistral-7B-Instruct-v0.1-mii-deployment', '--load-balancer-port', '50050', '--restful-gateway-port', '51080', '--server-port', '50051', '--zmq-port', '25555', '--model-config', 'eyJtb2RlbF9uYW1lX29yX3BhdGgiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0b2tlbml6ZXIiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0YXNrIjogInRleHQtZ2VuZXJhdGlvbiIsICJ0ZW5zb3JfcGFyYWxsZWwiOiAxLCAiaW5mZXJlbmNlX2VuZ2luZV9jb25maWciOiB7InRlbnNvcl9wYXJhbGxlbCI6IHsidHBfc2l6ZSI6IDF9LCAic3RhdGVfbWFuYWdlciI6IHsibWF4X3RyYWNrZWRfc2VxdWVuY2VzIjogMjA0OCwgIm1heF9yYWdnZWRfYmF0Y2hfc2l6ZSI6IDc2OCwgIm1heF9yYWdnZWRfc2VxdWVuY2VfY291bnQiOiA1MTIsICJtYXhfY29udGV4dCI6IDgxOTIsICJtZW1vcnlfY29uZmlnIjogeyJtb2RlIjogInJlc2VydmUiLCAic2l6ZSI6IDEwMDAwMDAwMDB9LCAib2ZmbG9hZCI6IGZhbHNlfX0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgInptcV9wb3J0X251bWJlciI6IDI1NTU1LCAicmVwbGljYV9udW0iOiAxLCAicmVwbGljYV9jb25maWdzIjogW3siaG9zdG5hbWUiOiAibG9jYWxob3N0IiwgInRlbnNvcl9wYXJhbGxlbF9wb3J0cyI6IFs1MDA1MV0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgImdwdV9pbmRpY2VzIjogWzBdLCAiem1xX3BvcnQiOiAyNTU1NX1dLCAibWF4X2xlbmd0aCI6IG51bGwsICJhbGxfcmFua19vdXRwdXQiOiBmYWxzZSwgInN5bmNfZGVidWciOiBmYWxzZSwgInByb2ZpbGVfbW9kZWxfdGltZSI6IGZhbHNlfQ=='] exits with return code = -9
[2023-11-06 15:43:01,725] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
[2023-11-06 15:43:01,725] [INFO] [server.py:124:_wait_until_server_is_live] waiting for server to start...
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /usr/lib/python3.8/runpy.py:194 in _run_module_as_main                                           │
│                                                                                                  │
│   191 │   main_globals = sys.modules["__main__"].__dict__                                        │
│   192 │   if alter_argv:                                                                         │
│   193 │   │   sys.argv[0] = mod_spec.origin                                                      │
│ ❱ 194 │   return _run_code(code, main_globals, None,                                             │
│   195 │   │   │   │   │    "__main__", mod_spec)                                                 │
│   196                                                                                            │
│   197 def run_module(mod_name, init_globals=None,                                                │
│                                                                                                  │
│ /usr/lib/python3.8/runpy.py:87 in _run_code                                                      │
│                                                                                                  │
│    84 │   │   │   │   │      __loader__ = loader,                                                │
│    85 │   │   │   │   │      __package__ = pkg_name,                                             │
│    86 │   │   │   │   │      __spec__ = mod_spec)                                                │
│ ❱  87 │   exec(code, run_globals)                                                                │
│    88 │   return run_globals                                                                     │
│    89                                                                                            │
│    90 def _run_module_code(code, init_globals=None,                                              │
│                                                                                                  │
│ /home/ubuntu/deepspeed/loadtesting/launch.py:2 in <module>                                       │
│                                                                                                  │
│   1 import mii                                                                                   │
│ ❱ 2 client = mii.serve("mistralai/Mistral-7B-Instruct-v0.1")                                     │
│   3 response = client.generate("Deepspeed is", max_new_tokens=128)                               │
│   4 print(response.response)                                                                     │
│   5                                                                                              │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/mii/server.py:61 in serve                        │
│                                                                                                  │
│    58 │   create_score_file(mii_config)                                                          │
│    59 │                                                                                          │
│    60 │   if mii_config.deployment_type == DeploymentType.LOCAL:                                 │
│ ❱  61 │   │   import_score_file(mii_config.deployment_name, DeploymentType.LOCAL).init()         │
│    62 │   │   return MIIClient(mii_config=mii_config)                                            │
│    63 │   if mii_config.deployment_type == DeploymentType.AML:                                   │
│    64 │   │   acr_name = mii.aml_related.utils.get_acr_name()                                    │
│                                                                                                  │
│ /tmp/mii_cache/mistralai/Mistral-7B-Instruct-v0.1-mii-deployment/score.py:33 in init             │
│                                                                                                  │
│   30 │   │   start_server = False                                                                │
│   31 │                                                                                           │
│   32 │   if start_server:                                                                        │
│ ❱ 33 │   │   mii.server.MIIServer(mii_config)                                                    │
│   34 │                                                                                           │
│   35 │   global model                                                                            │
│   36 │   model = None                                                                            │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/mii/server.py:106 in __init__                    │
│                                                                                                  │
│   103 │   │   mii_config.generate_replica_configs()                                              │
│   104 │   │                                                                                      │
│   105 │   │   processes = self._initialize_service(mii_config)                                   │
│ ❱ 106 │   │   self._wait_until_server_is_live(processes,                                         │
│   107 │   │   │   │   │   │   │   │   │   │   mii_config.model_config.replica_configs)           │
│   108 │                                                                                          │
│   109 │   def _wait_until_server_is_live(self,                                                   │
│                                                                                                  │
│ /home/ubuntu/.local/lib/python3.8/site-packages/mii/server.py:121 in _wait_until_server_is_live  │
│                                                                                                  │
│   118 │   │   │   │   │   for port in repl_config.tensor_parallel_ports)                         │
│   119 │   │   │   │   process_alive = self._is_server_process_alive(process)                     │
│   120 │   │   │   │   if not process_alive:                                                      │
│ ❱ 121 │   │   │   │   │   raise RuntimeError(                                                    │
│   122 │   │   │   │   │   │   "server crashed for some reason, unable to proceed")               │
│   123 │   │   │   │   time.sleep(4)                                                              │
│   124 │   │   │   │   logger.info("waiting for server to start...")                              │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: server crashed for some reason, unable to proceed

harryjulian · Nov 06 '23

Can you tell me your CUDA driver version etc?
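
(nvidia-smi reports the driver version directly, e.g. something like:)

nvidia-smi --query-gpu=driver_version --format=csv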

idealover · Nov 06 '23

@harryjulian It looks like the process that holds the model is failing:

[2023-11-06 15:42:57,686] [ERROR] [launch.py:321:sigkill_handler] ['/usr/bin/python3', '-m', 'mii.launch.multi_gpu_server', ...

Sometimes it can be hard to get a proper error and stack trace when using gRPC. Could you try loading the model in the same environment, but using mii.pipeline instead? Please share the error you see there!

import mii
pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")
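
(If the pipeline builds, running a prompt through it, as in the README example, should surface the underlying error directly:)

response = pipe(["Deepspeed is"], max_new_tokens=128)
print(response)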

mrwyattii · Nov 06 '23

@idealover Driver details below.

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0

@mrwyattii Tried loading the model using the pipeline; it works fine with no errors.

harryjulian · Nov 07 '23

@harryjulian are you launching the pipeline example with deepspeed or python?

mrwyattii · Nov 07 '23

@mrwyattii I was initially using python but I've also tried deepspeed --num_gpus 1 to no avail.

harryjulian · Nov 08 '23

Could you try one more thing for me, @harryjulian? From the output log you shared, it looks like the actual inference model process is failing. Can you run the following? It will attempt to create just the inference model process without any other MII processes:

/usr/bin/python3 -m mii.launch.multi_gpu_server --deployment-name mistralai/Mistral-7B-Instruct-v0.1-mii-deployment --load-balancer-port 50050 --restful-gateway-port 51080 --server-port 50051 --zmq-port 25555 --model-config "eyJtb2RlbF9uYW1lX29yX3BhdGgiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0b2tlbml6ZXIiOiAibWlzdHJhbGFpL01pc3RyYWwtN0ItSW5zdHJ1Y3QtdjAuMSIsICJ0YXNrIjogInRleHQtZ2VuZXJhdGlvbiIsICJ0ZW5zb3JfcGFyYWxsZWwiOiAxLCAiaW5mZXJlbmNlX2VuZ2luZV9jb25maWciOiB7InRlbnNvcl9wYXJhbGxlbCI6IHsidHBfc2l6ZSI6IDF9LCAic3RhdGVfbWFuYWdlciI6IHsibWF4X3RyYWNrZWRfc2VxdWVuY2VzIjogMjA0OCwgIm1heF9yYWdnZWRfYmF0Y2hfc2l6ZSI6IDc2OCwgIm1heF9yYWdnZWRfc2VxdWVuY2VfY291bnQiOiA1MTIsICJtYXhfY29udGV4dCI6IDgxOTIsICJtZW1vcnlfY29uZmlnIjogeyJtb2RlIjogInJlc2VydmUiLCAic2l6ZSI6IDEwMDAwMDAwMDB9LCAib2ZmbG9hZCI6IGZhbHNlfX0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgInptcV9wb3J0X251bWJlciI6IDI1NTU1LCAicmVwbGljYV9udW0iOiAxLCAicmVwbGljYV9jb25maWdzIjogW3siaG9zdG5hbWUiOiAibG9jYWxob3N0IiwgInRlbnNvcl9wYXJhbGxlbF9wb3J0cyI6IFs1MDA1MV0sICJ0b3JjaF9kaXN0X3BvcnQiOiAyOTUwMCwgImdwdV9pbmRpY2VzIjogWzBdLCAiem1xX3BvcnQiOiAyNTU1NX1dLCAibWF4X2xlbmd0aCI6IG51bGwsICJhbGxfcmFua19vdXRwdXQiOiBmYWxzZSwgInN5bmNfZGVidWciOiBmYWxzZSwgInByb2ZpbGVfbW9kZWxfdGltZSI6IGZhbHNlfQ=="

mrwyattii · Nov 09 '23