InternLM-XComposer
Multi-GPU inference
Hello! I got this error when running examples/example_chat.py and share-cap_batch_infer.py with multiple GPUs. Does anyone know how to solve it?
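For context, a minimal sketch of the multi-GPU loading path I am assuming; the checkpoint name and the `device_map="auto"` sharding below are placeholders, not the exact code from the example scripts:

```python
# Sketch of multi-GPU model loading via transformers/accelerate.
# The checkpoint name and device_map="auto" sharding are placeholders,
# not the exact code from examples/example_chat.py.
import torch
from transformers import AutoModel, AutoTokenizer

ckpt = "internlm/internlm-xcomposer-7b"  # placeholder checkpoint path
tokenizer = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)
model = AutoModel.from_pretrained(
    ckpt,
    torch_dtype=torch.float16,
    device_map="auto",  # shards layers across visible GPUs (requires accelerate)
    trust_remote_code=True,
).eval()
```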
When I set os.environ['CUDA_LAUNCH_BLOCKING'] = '1', it reveals more details.
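For reference, a minimal sketch of how the flag is set; it only takes effect if it is in place before CUDA is initialized:

```python
# CUDA_LAUNCH_BLOCKING must be set before CUDA is initialized,
# so it is exported before importing torch.
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

import torch  # kernel launches now run synchronously, so the failing
              # call is reported at the line that actually triggers it
```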