Chinese-CLIP

Model fine-tuning fails with an error

Open · shenyan-008 opened this issue 5 months ago · 0 comments

```
root@autodl-container-69e74d8599-35a096fb:~/Chinese-CLIP-master# bash run_scripts/muge_finetune_vit-b-16_rbt-base.sh datapath
/root/miniconda3/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects --local-rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  main()
usage: launch.py [-h] [--nnodes NNODES] [--nproc-per-node NPROC_PER_NODE]
                 [--rdzv-backend RDZV_BACKEND] [--rdzv-endpoint RDZV_ENDPOINT]
                 [--rdzv-id RDZV_ID] [--rdzv-conf RDZV_CONF] [--standalone]
                 [--max-restarts MAX_RESTARTS]
                 [--monitor-interval MONITOR_INTERVAL]
                 [--start-method {spawn,fork,forkserver}]
                 [--event-log-handler EVENT_LOG_HANDLER] [--role ROLE] [-m]
                 [--no-python] [--run-path] [--log-dir LOG_DIR]
                 [-r REDIRECTS] [-t TEE]
                 [--local-ranks-filter LOCAL_RANKS_FILTER]
                 [--node-rank NODE_RANK] [--master-addr MASTER_ADDR]
                 [--master-port MASTER_PORT] [--local-addr LOCAL_ADDR]
                 [--logs-specs LOGS_SPECS] [--use-env]
                 training_script ...
launch.py: error: ambiguous option: --logs=datapath/experiments/ could match --logs-specs, --logs_specs
```
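The FutureWarning above describes the torchrun migration: instead of receiving a `--local-rank` flag injected by `torch.distributed.launch`, the training script should read `LOCAL_RANK` from the environment. A minimal sketch of that change (the argument name and default here are illustrative, not Chinese-CLIP's actual training code):

```python
import argparse
import os

# Hypothetical training-script argument parsing. Under torchrun, the local
# rank arrives via the LOCAL_RANK environment variable rather than a
# --local-rank flag appended to the command line by the launcher.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--local-rank",
    type=int,
    default=int(os.environ.get("LOCAL_RANK", 0)),
)

# No --local-rank flag is needed on the command line when torchrun
# sets the environment variable for each worker process.
args = parser.parse_args([])
print(args.local_rank)
```

Keeping the flag with an environment-based default lets the same script run under both the old launcher and torchrun during a transition period.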

Why does this happen? Is the project still officially maintained? @https://github.com/OFA-Sys/Chinese-CLIP?tab=readme-ov-file#%E8%B7%A8%E6%A8%A1%E6%80%81%E6%A3%80%E7%B4%A2
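As for the why: newer versions of `torch.distributed.launch` register the `--logs-specs` option under both a dashed and an underscored spelling, and Python's argparse abbreviation matching then treats the run script's own `--logs` flag as an ambiguous prefix and aborts before anything is handed to `cn_clip/training/main.py`. A minimal standalone reproduction of that argparse behavior (this is a stand-in parser, not torch's actual one; the alias setup is inferred from the error message):

```python
import argparse

# Stand-in for launch.py's parser, which (per the error message) exposes
# both "--logs-specs" and "--logs_specs" as aliases of one option.
parser = argparse.ArgumentParser(prog="launch.py")
parser.add_argument("--logs-specs", "--logs_specs", dest="logs_specs")

# "--logs" is a prefix of both aliases, so argparse's abbreviation
# matching reports an ambiguous option and exits with a usage error.
try:
    parser.parse_args(["--logs=datapath/experiments/"])
except SystemExit as exc:
    print(f"argparse exited with code {exc.code}")
```

The practical consequence is that any flag the launcher happens to prefix-match is swallowed by `launch.py` instead of being forwarded to the training script, which is why pinning an older torch or moving to a `torchrun`-based invocation are the usual workarounds.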

```
root@autodl-container-69e74d8599-35a096fb:~/Chinese-CLIP-master# pip show torch torchvision torchaudio
WARNING: Package(s) not found: torchaudio
Name: torch
Version: 2.8.0+cu128
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: [email protected]
License: BSD-3-Clause
Location: /root/miniconda3/lib/python3.12/site-packages
Requires: filelock, fsspec, jinja2, networkx, nvidia-cublas-cu12, nvidia-cuda-cupti-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-runtime-cu12, nvidia-cudnn-cu12, nvidia-cufft-cu12, nvidia-cufile-cu12, nvidia-curand-cu12, nvidia-cusolver-cu12, nvidia-cusparse-cu12, nvidia-cusparselt-cu12, nvidia-nccl-cu12, nvidia-nvjitlink-cu12, nvidia-nvtx-cu12, setuptools, sympy, triton, typing-extensions
Required-by: cn-clip, timm, torchvision

Name: torchvision
Version: 0.23.0+cu128
Summary: image and video datasets and models for torch deep learning
Home-page: https://github.com/pytorch/vision
Author: PyTorch Core Team
Author-email: [email protected]
License: BSD
Location: /root/miniconda3/lib/python3.12/site-packages
Requires: numpy, pillow, torch
Required-by: cn-clip, timm

root@autodl-container-69e74d8599-35a096fb:~/Chinese-CLIP-master# nvidia-smi
Thu Sep 11 13:49:41 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.124.04             Driver Version: 570.124.04     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090        On  |   00000000:01:00.0 Off |                  N/A |
| 30%   31C    P8             39W /  350W |       1MiB /  24576MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
```

shenyan-008 · Sep 11 '25 05:09