ERROR: Cannot install -r requirements.txt (line 11), detectron2 and spacy because these package versions have conflicting dependencies.
The problem keeps showing up.
Same issue here.
More information is as follows:
- openmim 0.3.7 depends on Click
- nltk 3.8.1 depends on click
- typer 0.3.0 depends on click<7.2.0 and >=7.1.1
- black 23.3.0 depends on click>=8.0.0
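A minimal sketch (plain Python, no packaging library) of why the click pins above cannot be satisfied together: typer 0.3.0 wants click>=7.1.1,<7.2.0 while black 23.3.0 wants click>=8.0.0, so the intersection of the two ranges is empty and pip has nothing to install.

```python
def parse(v):
    return tuple(int(x) for x in v.split("."))

def satisfies(v, spec):
    lo, hi = spec  # inclusive lower bound, exclusive upper bound (None = open)
    return (lo is None or parse(v) >= parse(lo)) and (hi is None or parse(v) < parse(hi))

typer_spec = ("7.1.1", "7.2.0")   # click>=7.1.1,<7.2.0
black_spec = ("8.0.0", None)      # click>=8.0.0
candidates = ["7.1.1", "7.1.2", "8.0.0", "8.1.3"]
both = [v for v in candidates if satisfies(v, typer_spec) and satisfies(v, black_spec)]
# both == [] -> no single click release satisfies typer 0.3.0 and black 23.3.0
```

Any fix therefore has to change one of the two pins, e.g. a newer typer that accepts click 8.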
Which Python version are you using?
We use Python 3.8 and spacy==3.5.1
In ChatVideo, typer is 0.7.0
3.10.6
Ubuntu 22.04 LTS (newly installed), Python 3.10.6. I modified requirements.txt: spacy==3.0.9 -> spacy==3.5.1,
and hit a new issue: en_core_web_sm 3.0.0 depends on spacy>=3.0.0,<3.1.0.
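A sketch of the constraint behind that new error: each en_core_web_sm release only accepts the spacy minor series it was built for (3.0.0 wants spacy>=3.0.0,<3.1.0), so the model and spacy versions have to move together.

```python
def to_tuple(s):
    return tuple(int(x) for x in s.split("."))

def model_accepts(spacy_version, lo="3.0.0", hi="3.1.0"):
    # en_core_web_sm 3.0.0 declares spacy>=3.0.0,<3.1.0
    return to_tuple(lo) <= to_tuple(spacy_version) < to_tuple(hi)

model_accepts("3.0.9")  # True  -> en_core_web_sm 3.0.0 works with spacy==3.0.9
model_accepts("3.5.1")  # False -> hence the conflict after editing requirements.txt
```

So bumping spacy to 3.5.1 also requires a matching en_core_web_sm 3.5.x model, not the 3.0.0 one.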
I uploaded my conda environment to https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat/environment.yaml @Pseudoking @jackylee1
(base) PS D:\Ask-Anything\video_chat_with_StableLM> conda env create -f environment.yaml
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- numpy==1.23.5=py38h14f4228_0
- zstd==1.5.5=hc292b87_0
- idna==3.4=py38h06a4308_0
- jupyter_client==8.1.0=py38h06a4308_0
- ca-certificates==2023.01.10=h06a4308_0
- ipython==8.12.0=py38h06a4308_0
- pyzmq==23.2.0=py38h6a678d5_0
- libtasn1==4.19.0=h5eee18b_0
- psutil==5.9.0=py38h5eee18b_0
- certifi==2022.12.7=py38h06a4308_0
- libgomp==11.2.0=h1234567_1
- ffmpeg==4.3=hf484d3e_0
- giflib==5.2.1=h5eee18b_3
- pysocks==1.7.1=py38h06a4308_0
- lcms2==2.12=h3be6417_0
- python==3.8.16=h7a1cb2a_3
- wheel==0.38.4=py38h06a4308_0
- gnutls==3.6.15=he1e5248_0
- libunistring==0.9.10=h27cfd23_0
- flit-core==3.8.0=py38h06a4308_0
- comm==0.1.2=py38h06a4308_0
- jupyter_core==5.3.0=py38h06a4308_0
- pyopenssl==23.0.0=py38h06a4308_0
- libgfortran5==11.2.0=h1234567_1
- libstdcxx-ng==11.2.0=h1234567_1
- libtiff==4.5.0=h6a678d5_2
- cryptography==39.0.1=py38h9ce1e76_0
- lame==3.100=h7b6447c_0
- gmp==6.2.1=h295c915_3
- tornado==6.2=py38h5eee18b_0
- cffi==1.15.1=py38h5eee18b_3
- matplotlib-inline==0.1.6=py38h06a4308_0
- mkl_random==1.2.2=py38h51133e4_0
- _openmp_mutex==5.1=1_gnu
- pip==23.0.1=py38h06a4308_0
- jedi==0.18.1=py38h06a4308_1
- nettle==3.7.3=hbbd107a_1
- zlib==1.2.13=h5eee18b_0
- tk==8.6.12=h1ccaba5_0
- openssl==1.1.1t=h7f8727e_0
- packaging==23.0=py38h06a4308_0
- libgfortran-ng==11.2.0=h00389a5_1
- libgcc-ng==11.2.0=h1234567_1
- libffi==3.4.2=h6a678d5_6
- typing_extensions==4.4.0=py38h06a4308_0
- mkl==2021.4.0=h06a4308_640
- libdeflate==1.17=h5eee18b_0
- nest-asyncio==1.5.6=py38h06a4308_0
- scipy==1.10.1=py38h14f4228_0
- requests==2.28.1=py38h06a4308_1
- pytorch-cuda==11.7=h778d358_3
- pillow==9.4.0=py38h6a678d5_0
- pytorch==1.13.1=py3.8_cuda11.7_cudnn8.5.0_0
- libpng==1.6.39=h5eee18b_0
- traitlets==5.7.1=py38h06a4308_0
- libiconv==1.16=h7f8727e_2
- numpy-base==1.23.5=py38h31eccc5_0
- sqlite==3.41.2=h5eee18b_0
- zeromq==4.3.4=h2531618_0
- xz==5.2.10=h5eee18b_1
- libwebp-base==1.2.4=h5eee18b_1
- libcufile==1.6.0.25=0
- debugpy==1.5.1=py38h295c915_0
- jpeg==9e=h5eee18b_1
- lerc==3.0=h295c915_0
- mkl_fft==1.3.1=py38hd3c417c_0
- prompt-toolkit==3.0.36=py38h06a4308_0
- libidn2==2.3.2=h7f8727e_0
- mkl-service==2.4.0=py38h7f8727e_0
- platformdirs==2.5.2=py38h06a4308_0
- ld_impl_linux-64==2.38=h1181459_1
- libcufft==10.7.2.124=h4fbf590_0
- lz4-c==1.9.4=h6a678d5_0
- readline==8.2=h5eee18b_0
- openh264==2.1.1=h4ff587b_0
- libwebp==1.2.4=h11a3e52_1
- intel-openmp==2021.4.0=h06a4308_3561
- brotlipy==0.7.0=py38h27cfd23_1003
- freetype==2.12.1=h4a9f257_0
- urllib3==1.26.15=py38h06a4308_0
- bzip2==1.0.8=h7b6447c_0
- ipykernel==6.19.2=py38hb070fc8_0
- libsodium==1.0.18=h7b6447c_0
- ncurses==6.4=h6a678d5_0
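A sketch of one way around the ResolvePackageNotFound errors above: those build strings (e.g. py38h14f4228_0) are Linux-specific, so stripping them from each conda dependency lets conda re-solve the environment on another OS. (Exporting with `conda env export --no-builds` produces the same effect at the source.)

```python
def strip_build(dep):
    # conda entries look like name=version=build; keep only name=version
    parts = dep.split("=")
    return "=".join(parts[:2]) if len(parts) >= 3 else dep

strip_build("numpy=1.23.5=py38h14f4228_0")  # -> "numpy=1.23.5"
strip_build("python=3.8.16")                # unchanged, no build string
```

This applies to the `dependencies:` entries of the conda section only; pip entries (`name==version`) have no build strings.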
ERROR: Ignored the following versions that require a different python version: 0.0.100 Requires-Python >=3.8.1,<4.0; 0.0.101 Requires-Python >=3.8.1,<4.0; 0.0.101rc0 Requires-Python >=3.8.1,<4.0; 0.0.102 Requires-Python >=3.8.1,<4.0; 0.0.102rc0 Requires-Python >=3.8.1,<4.0; 0.0.103 Requires-Python >=3.8.1,<4.0; 0.0.104 Requires-Python >=3.8.1,<4.0; 0.0.105 Requires-Python >=3.8.1,<4.0; 0.0.106 Requires-Python >=3.8.1,<4.0; 0.0.107 Requires-Python >=3.8.1,<4.0; 0.0.108 Requires-Python >=3.8.1,<4.0; 0.0.109 Requires-Python >=3.8.1,<4.0; 0.0.110 Requires-Python >=3.8.1,<4.0; 0.0.111 Requires-Python >=3.8.1,<4.0; 0.0.112 Requires-Python >=3.8.1,<4.0; 0.0.113 Requires-Python >=3.8.1,<4.0; 0.0.114 Requires-Python >=3.8.1,<4.0; 0.0.115 Requires-Python >=3.8.1,<4.0; 0.0.116 Requires-Python >=3.8.1,<4.0; 0.0.117 Requires-Python >=3.8.1,<4.0; 0.0.118 Requires-Python >=3.8.1,<4.0; 0.0.119 Requires-Python >=3.8.1,<4.0; 0.0.120 Requires-Python >=3.8.1,<4.0; 0.0.121 Requires-Python >=3.8.1,<4.0; 0.0.122 Requires-Python >=3.8.1,<4.0; 0.0.123 Requires-Python >=3.8.1,<4.0; 0.0.124 Requires-Python >=3.8.1,<4.0; 0.0.125 Requires-Python >=3.8.1,<4.0; 0.0.126 Requires-Python >=3.8.1,<4.0; 0.0.127 Requires-Python >=3.8.1,<4.0; 0.0.128 Requires-Python >=3.8.1,<4.0; 0.0.129 Requires-Python >=3.8.1,<4.0; 0.0.130 Requires-Python >=3.8.1,<4.0; 0.0.131 Requires-Python >=3.8.1,<4.0; 0.0.132 Requires-Python >=3.8.1,<4.0; 0.0.133 Requires-Python >=3.8.1,<4.0; 0.0.134 Requires-Python >=3.8.1,<4.0; 0.0.135 Requires-Python >=3.8.1,<4.0; 0.0.136 Requires-Python >=3.8.1,<4.0; 0.0.137 Requires-Python >=3.8.1,<4.0; 0.0.138 Requires-Python >=3.8.1,<4.0; 0.0.139 Requires-Python >=3.8.1,<4.0; 0.0.140 Requires-Python >=3.8.1,<4.0; 0.0.141 Requires-Python >=3.8.1,<4.0; 0.0.142 Requires-Python >=3.8.1,<4.0; 0.0.143 Requires-Python >=3.8.1,<4.0; 0.0.144 Requires-Python >=3.8.1,<4.0; 0.0.145 Requires-Python >=3.8.1,<4.0; 0.0.146 Requires-Python >=3.8.1,<4.0; 0.0.28 Requires-Python >=3.8.1,<4.0; 0.0.29 
Requires-Python >=3.8.1,<4.0; 0.0.30 Requires-Python >=3.8.1,<4.0; 0.0.31 Requires-Python >=3.8.1,<4.0; 0.0.32 Requires-Python >=3.8.1,<4.0; 0.0.33 Requires-Python >=3.8.1,<4.0; 0.0.34 Requires-Python >=3.8.1,<4.0; 0.0.35 Requires-Python >=3.8.1,<4.0; 0.0.36 Requires-Python >=3.8.1,<4.0; 0.0.37 Requires-Python >=3.8.1,<4.0; 0.0.38 Requires-Python >=3.8.1,<4.0; 0.0.39 Requires-Python >=3.8.1,<4.0; 0.0.40 Requires-Python >=3.8.1,<4.0; 0.0.41 Requires-Python >=3.8.1,<4.0; 0.0.42 Requires-Python >=3.8.1,<4.0; 0.0.43 Requires-Python >=3.8.1,<4.0; 0.0.44 Requires-Python >=3.8.1,<4.0; 0.0.45 Requires-Python >=3.8.1,<4.0; 0.0.46 Requires-Python >=3.8.1,<4.0; 0.0.47 Requires-Python >=3.8.1,<4.0; 0.0.48 Requires-Python >=3.8.1,<4.0; 0.0.49 Requires-Python >=3.8.1,<4.0; 0.0.50 Requires-Python >=3.8.1,<4.0; 0.0.51 Requires-Python >=3.8.1,<4.0; 0.0.52 Requires-Python >=3.8.1,<4.0; 0.0.53 Requires-Python >=3.8.1,<4.0; 0.0.54 Requires-Python >=3.8.1,<4.0; 0.0.55 Requires-Python >=3.8.1,<4.0; 0.0.56 Requires-Python >=3.8.1,<4.0; 0.0.57 Requires-Python >=3.8.1,<4.0; 0.0.58 Requires-Python >=3.8.1,<4.0; 0.0.59 Requires-Python >=3.8.1,<4.0; 0.0.60 Requires-Python >=3.8.1,<4.0; 0.0.61 Requires-Python >=3.8.1,<4.0; 0.0.63 Requires-Python >=3.8.1,<4.0; 0.0.64 Requires-Python >=3.8.1,<4.0; 0.0.65 Requires-Python >=3.8.1,<4.0; 0.0.66 Requires-Python >=3.8.1,<4.0; 0.0.67 Requires-Python >=3.8.1,<4.0; 0.0.68 Requires-Python >=3.8.1,<4.0; 0.0.69 Requires-Python >=3.8.1,<4.0; 0.0.70 Requires-Python >=3.8.1,<4.0; 0.0.71 Requires-Python >=3.8.1,<4.0; 0.0.72 Requires-Python >=3.8.1,<4.0; 0.0.73 Requires-Python >=3.8.1,<4.0; 0.0.74 Requires-Python >=3.8.1,<4.0; 0.0.75 Requires-Python >=3.8.1,<4.0; 0.0.76 Requires-Python >=3.8.1,<4.0; 0.0.77 Requires-Python >=3.8.1,<4.0; 0.0.78 Requires-Python >=3.8.1,<4.0; 0.0.79 Requires-Python >=3.8.1,<4.0; 0.0.80 Requires-Python >=3.8.1,<4.0; 0.0.81 Requires-Python >=3.8.1,<4.0; 0.0.82 Requires-Python >=3.8.1,<4.0; 0.0.83 Requires-Python >=3.8.1,<4.0; 0.0.84 
Requires-Python >=3.8.1,<4.0; 0.0.85 Requires-Python >=3.8.1,<4.0; 0.0.86 Requires-Python >=3.8.1,<4.0; 0.0.87 Requires-Python >=3.8.1,<4.0; 0.0.88 Requires-Python >=3.8.1,<4.0; 0.0.89 Requires-Python >=3.8.1,<4.0; 0.0.90 Requires-Python >=3.8.1,<4.0; 0.0.91 Requires-Python >=3.8.1,<4.0; 0.0.92 Requires-Python >=3.8.1,<4.0; 0.0.93 Requires-Python >=3.8.1,<4.0; 0.0.94 Requires-Python >=3.8.1,<4.0; 0.0.95 Requires-Python >=3.8.1,<4.0; 0.0.96 Requires-Python >=3.8.1,<4.0; 0.0.97 Requires-Python >=3.8.1,<4.0; 0.0.98 Requires-Python >=3.8.1,<4.0; 0.0.99 Requires-Python >=3.8.1,<4.0; 0.0.99rc0 Requires-Python >=3.8.1,<4.0 ERROR: Could not find a version that satisfies the requirement langchain==0.0.101 (from versions: 0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.14, 0.0.15, 0.0.16, 0.0.17, 0.0.18, 0.0.19, 0.0.20, 0.0.21, 0.0.22, 0.0.23, 0.0.24, 0.0.25, 0.0.26, 0.0.27) ERROR: No matching distribution found for langchain==0.0.101
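A sketch of why pip skipped every one of those langchain releases: they declare Requires-Python >=3.8.1,<4.0, and a plausible reading of the thread is that the failing interpreter was exactly 3.8.0, which misses the floor by one patch release.

```python
def python_ok(py, lo="3.8.1", hi="4.0"):
    # mirrors langchain's Requires-Python >=3.8.1,<4.0 metadata
    t = lambda s: tuple(int(x) for x in s.split("."))
    return t(lo) <= t(py) < t(hi)

python_ok("3.8.0")   # False -> pip ignores langchain==0.0.101 entirely
python_ok("3.8.16")  # True  -> the Windows 11 env above can install it
```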
here are my conda environment in Windows 11 with python3.8 @jackylee1 :
name: py38
channels:
- msys2
- defaults
dependencies:
- ca-certificates=2023.01.10=haa95532_0
- libffi=3.4.2=hd77b12b_6
- libpython=2.1=py38_0
- m2w64-binutils=2.25.1=5
- m2w64-bzip2=1.0.6=6
- m2w64-crt-git=5.0.0.4636.2595836=2
- m2w64-gcc=5.3.0=6
- m2w64-gcc-ada=5.3.0=6
- m2w64-gcc-fortran=5.3.0=6
- m2w64-gcc-libgfortran=5.3.0=6
- m2w64-gcc-libs=5.3.0=7
- m2w64-gcc-libs-core=5.3.0=7
- m2w64-gcc-objc=5.3.0=6
- m2w64-gmp=6.1.0=2
- m2w64-headers-git=5.0.0.4636.c0ad18a=2
- m2w64-isl=0.16.1=2
- m2w64-libiconv=1.14=6
- m2w64-libmangle-git=5.0.0.4509.2e5a9a2=2
- m2w64-libwinpthread-git=5.0.0.4634.697f757=2
- m2w64-make=4.1.2351.a80a8b8=2
- m2w64-mpc=1.0.3=3
- m2w64-mpfr=3.1.4=4
- m2w64-pkg-config=0.29.1=2
- m2w64-toolchain=5.3.0=7
- m2w64-tools-git=5.0.0.4592.90b8472=2
- m2w64-windows-default-manifest=6.4=3
- m2w64-winpthreads-git=5.0.0.4634.697f757=2
- m2w64-zlib=1.2.8=10
- msys2-conda-epoch=20160418=1
- openssl=1.1.1t=h2bbff1b_0
- pip=23.0.1=py38haa95532_0
- python=3.8.16=h6244533_3
- sqlite=3.41.2=h2bbff1b_0
- vc=14.2=h21ff451_1
- vs2015_runtime=14.27.29016=h5e58377_2
- wheel=0.38.4=py38haa95532_0
- pip:
- absl-py==1.4.0
- accelerate==0.18.0
- addict==2.4.0
- aiofiles==23.1.0
- aiohttp==3.8.4
- aiosignal==1.3.1
- altair==4.2.2
- antlr4-python3-runtime==4.9.3
- anyio==3.6.2
- async-timeout==4.0.2
- attrs==23.1.0
- bitsandbytes==0.38.1
- blis==0.7.9
- boto3==1.26.117
- botocore==1.29.117
- braceexpand==0.1.7
- cachetools==5.3.0
- catalogue==2.0.8
- certifi==2022.12.7
- charset-normalizer==3.1.0
- click==7.1.2
- colorama==0.4.6
- contourpy==1.0.7
- cycler==0.11.0
- cymem==2.0.7
- cython==0.29.34
- dataclasses-json==0.5.7
- decord==0.6.0
- detectron2==0.6
- einops==0.6.1
- en-core-web-sm==3.0.0
- entrypoints==0.4
- fairscale==0.4.4
- fastapi==0.95.1
- ffmpy==0.3.0
- filelock==3.12.0
- fonttools==4.39.3
- frozenlist==1.3.3
- fsspec==2023.4.0
- future==0.18.3
- google-auth==2.17.3
- google-auth-oauthlib==1.0.0
- gradio==3.27.0
- gradio-client==0.1.3
- greenlet==2.0.2
- grpcio==1.54.0
- h11==0.14.0
- httpcore==0.17.0
- httpx==0.24.0
- huggingface-hub==0.13.4
- idna==3.4
- imageio==2.27.0
- imageio-ffmpeg==0.4.8
- importlib-resources==5.12.0
- jinja2==3.1.2
- jmespath==1.0.1
- joblib==1.2.0
- jsonschema==4.17.3
- kiwisolver==1.4.4
- langchain==0.0.101
- linkify-it-py==2.0.0
- lvis==0.5.3
- markdown==3.4.3
- markdown-it-py==2.2.0
- markupsafe==2.1.2
- marshmallow==3.19.0
- marshmallow-enum==1.5.1
- matplotlib==3.7.1
- mdit-py-plugins==0.3.3
- mdurl==0.1.2
- mmcv==2.0.0
- mmengine==0.7.2
- model-index==0.1.11
- multidict==6.0.4
- murmurhash==1.0.9
- mypy-extensions==1.0.0
- nltk==3.8.1
- numpy==1.24.2
- oauthlib==3.2.2
- omegaconf==2.3.0
- openai==0.27.4
- opencv-python==4.7.0.72
- openmim==0.3.7
- ordered-set==4.1.0
- orjson==3.8.10
- packaging==23.1
- pandas==2.0.0
- pathy==0.10.1
- pillow==9.5.0
- pkgutil-resolve-name==1.3.10
- preshed==3.0.8
- protobuf==4.22.3
- psutil==5.9.5
- pyasn1==0.5.0
- pyasn1-modules==0.3.0
- pycocotools-windows==2.0.0.2
- pydantic==1.8.2
- pydeprecate==0.3.1
- pydub==0.25.1
- pyparsing==3.0.9
- pyrsistent==0.19.3
- python-multipart==0.0.6
- pytorch-lightning==1.5.10
- pytz==2023.3
- pyyaml==6.0
- regex==2023.3.23
- requests==2.28.2
- requests-oauthlib==1.3.1
- rich==13.3.4
- rsa==4.9
- s3transfer==0.6.0
- sacremoses==0.0.53
- scipy==1.10.0
- semantic-version==2.10.0
- sentencepiece==0.1.98
- setuptools==59.5.0
- simplet5==0.1.4
- six==1.16.0
- smart-open==6.3.0
- sniffio==1.3.0
- spacy==3.0.9
- spacy-legacy==3.0.12
- sqlalchemy==1.4.47
- srsly==2.4.6
- starlette==0.26.1
- tabulate==0.9.0
- tenacity==8.2.2
- tensorboard==2.12.2
- tensorboard-data-server==0.7.0
- tensorboard-plugin-wit==1.8.1
- termcolor==2.2.0
- thinc==8.0.17
- timm==0.4.12
- tokenizers==0.13.3
- tomli==2.0.1
- toolz==0.12.0
- torch==1.13.1
- torchmetrics==0.11.4
- torchvision==0.14.1
- tqdm==4.65.0
- transformers==4.16.2
- typer==0.3.2
- typing-extensions==4.5.0
- typing-inspect==0.8.0
- tzdata==2023.3
- uc-micro-py==1.0.1
- urllib3==1.26.15
- uvicorn==0.21.1
- wasabi==0.10.1
- webdataset==0.2.48
- websockets==11.0.2
- werkzeug==2.2.3
- wget==3.2
- yapf==0.33.0
- yarl==1.8.2
- zipp==3.15.0
prefix: C:\Users\pjlab\anaconda3\envs\py38
I am using Windows 11 and Python 3.8.16; langchain==0.0.101 exists.
Same here, I'm hitting all kinds of dependency conflicts.
I created the python3.8.16 env and installed everything except the last two, en_core_web_sm and detectron2.
Thanks for your feedback! I fixed the requirements.txt and added the instructions for detectron2 and en_core_web_sm in the installation part.
INFO: pip is looking at multiple versions of wget to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of torchvision to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of torch to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of fairscale to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of transformers to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of timm to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of <Python from Requires-Python> to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of langchain to determine which version is compatible with other requirements. This could take a while.
ERROR: Cannot install -r requirements.txt (line 6) and transformers==4.28.1 because these package versions have conflicting dependencies.
The conflict is caused by:
    The user requested transformers==4.28.1
    simplet5 0.1.4 depends on transformers==4.16.2
    The user requested transformers==4.28.1
    simplet5 0.1.3 depends on transformers==4.10.0
    The user requested transformers==4.28.1
    simplet5 0.1.2 depends on transformers==4.6.1
    The user requested transformers==4.28.1
    simplet5 0.1.1 depends on transformers==4.8.2
    The user requested transformers==4.28.1
    simplet5 0.1.0 depends on transformers==4.6.1
    The user requested transformers==4.28.1
    simplet5 0.0.9 depends on transformers==4.6.1
    The user requested transformers==4.28.1
    simplet5 0.0.7 depends on transformers==4.6.1
To fix this you could try to:
- loosen the range of package versions you've specified
- remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM>
Sigh.
(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM> python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
Collecting git+https://github.com/facebookresearch/detectron2.git
  Cloning https://github.com/facebookresearch/detectron2.git to c:\users\administrator\appdata\local\temp\pip-req-build-x6pu3ck0
  Running command git clone --filter=blob:none --quiet https://github.com/facebookresearch/detectron2.git 'C:\Users\Administrator\AppData\Local\Temp\pip-req-build-x6pu3ck0'
  fatal: unable to access 'https://github.com/facebookresearch/detectron2.git/': OpenSSL SSL_read: Connection was reset, errno 10054
  fatal: unable to access 'https://github.com/facebookresearch/detectron2.git/': Failed to connect to github.com port 443 after 21084 ms: Timed out
  error: unable to read sha1 file of .clang-format (39b1b3d603ed0cf6b7f94c9c08067f148f35613f)
  fatal: unable to access 'https://github.com/facebookresearch/detectron2.git/': OpenSSL SSL_read: Connection was reset, errno 10054
  error: unable to read sha1 file of .github/CONTRIBUTING.md (9bab709cae689ba3b92dd52f7fbcc0c6926f4a38)
  fatal: unable to access 'https://github.com/facebookresearch/detectron2.git/': Failed to connect to github.com port 443 after 21122 ms: Timed out
Try to export your proxy for git
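A sketch of that advice, not specific to this project: git and pip both honor the standard proxy environment variables, so setting them before the install can get the detectron2 clone through a flaky or blocked connection. The address below is a placeholder for whatever local proxy you actually run.

```python
import os

proxy = "http://127.0.0.1:7890"  # placeholder host/port for a local proxy
os.environ["HTTP_PROXY"] = proxy
os.environ["HTTPS_PROXY"] = proxy
# One-off git-only alternative (shell):
#   git config --global http.proxy http://127.0.0.1:7890
```

The variables only affect processes started after they are set, so export them in the same shell that runs `pip install`.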
You can loosen the transformers version in requirements.txt if you only install video_chat.
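A sketch of why loosening works: every simplet5 release hard-pins an exact transformers version (0.1.4 pins 4.16.2), so a pinned top-level requirement can only resolve if it names that exact version; dropping the pin lets pip pick it.

```python
SIMPLET5_PIN = "4.16.2"  # taken from the resolver output above

def resolvable(user_pin):
    # None models an unpinned "transformers" line in requirements.txt
    return user_pin is None or user_pin == SIMPLET5_PIN

resolvable("4.28.1")  # False -> the ResolutionImpossible error above
resolvable(None)      # True  -> pip is free to select 4.16.2
resolvable("4.16.2")  # True
```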
No luck. Ugh, this is really infuriating.
(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM> python setup.py build --force develop
C:\Users\Administrator\miniconda3\envs\videochat\python.exe: can't open file 'D:\askmeany\ask-anything\video_chat_with_StableLM\setup.py': [Errno 2] No such file or directory
(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM> cd detectron2
(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM\detectron2> python setup.py build --force develop
Traceback (most recent call last):
  File "D:\askmeany\ask-anything\video_chat_with_StableLM\detectron2\setup.py", line 11, in <module>
    from torch.utils.cpp_extension import CUDA_HOME, CppExtension, CUDAExtension
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\utils\__init__.py", line 4, in <module>
    from .throughput_benchmark import ThroughputBenchmark
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\utils\throughput_benchmark.py", line 2, in <module>
    import torch._C
ModuleNotFoundError: No module named 'torch._C'
This is seriously driving me crazy.
Reinstall your PyTorch. Do you have a GPU in your machine?
Yes
[INFO] initialize InternVideo model success!
Traceback (most recent call last):
  File "D:\askmeany\ask-anything\video_chat_with_StableLM\app.py", line 33, in <module>
Uninstalled and reinstalled the above with: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
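After a reinstall like the one above, a quick sanity check that torch's compiled core (torch._C, the module reported missing earlier) imports, and whether CUDA is visible. This is a generic sketch, not part of the project; it returns False instead of raising when torch is absent or broken.

```python
import importlib

def torch_healthy():
    try:
        torch = importlib.import_module("torch")
        importlib.import_module("torch._C")  # the piece that was missing above
        return bool(torch.cuda.is_available())
    except ImportError:
        return False
```

If this returns False on a GPU machine, the install is still broken or is a CPU-only build.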
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Traceback (most recent call last):
File "D:\askmeany\ask-anything\video_chat_with_StableLM\app.py", line 36, in
    raise KeyError(key)
KeyError: 'gpt_neox'
Which transformers version are you using? In chat_video_with_stablelm
and chat_video_with_moss you should install the latest version, 4.28.1.
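A sketch of the advice above: the KeyError('gpt_neox') means the installed transformers' model-type registry predates StableLM's GPTNeoX architecture; treating 4.28.1 (the version recommended here) as the floor avoids it.

```python
def new_enough(version, floor=(4, 28, 1)):
    # floor follows the thread's recommendation, not transformers' changelog
    return tuple(int(x) for x in version.split(".")) >= floor

new_enough("4.16.2")  # False -> the KeyError above
new_enough("4.28.1")  # True
```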
I put the project on the D drive; why does the model download to C?
(videochat) PS D:\askmeany\Ask-Anything\video_chat_with_StableLM> python app.py
load checkpoint from pretrained_models/tag2text_swin_14m.pth
[INFO] initialize caption model success!
Drop path rate: 0.0
No L_MHRA: True
Double L_MHRA: True
(the three lines above repeat twelve times)
Use checkpoint: False
Checkpoint number: [0]
Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Downloading (…)l-00003-of-00004.bin: 100% 9.75G/9.75G [14:30<00:00, 11.2MB/s]
Downloading (…)l-00004-of-00004.bin: 100% 2.45G/2.45G [03:37<00:00, 11.3MB/s]
Downloading shards: 100% 4/4 [18:09<00:00, 272.36s/it]
Loading checkpoint shards: 25% 1/4 [00:16<00:48, 16.12s/it]
Traceback (most recent call last):
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\modeling_utils.py", line 442, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\serialization.py", line 797, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\serialization.py", line 283, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\modeling_utils.py", line 446, in load_state_dict
    if f.read(7) == "version":
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 128: invalid start byte
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\app.py", line 36, in
In Hugging Face, the default cache directory is ~/.cache/huggingface/. Change the cache location by setting the shell environment variable TRANSFORMERS_CACHE to another directory:
export TRANSFORMERS_CACHE="/path/to/another/directory"
or
change https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat_with_StableLM/stablelm.py#L30 to
self.m = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16, cache_dir='./').cuda()
self.tok = AutoTokenizer.from_pretrained("stabilityai/stablelm-tuned-alpha-7b", cache_dir='./')
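A sketch of the environment-variable route from inside Python; the variable must be set before transformers is imported for the redirect to take effect.

```python
import os
import tempfile

cache_dir = tempfile.mkdtemp()  # stand-in for e.g. a directory on the D: drive
os.environ["TRANSFORMERS_CACHE"] = cache_dir
# Any from_pretrained(...) call made after importing transformers now caches
# under cache_dir unless an explicit cache_dir= argument overrides it.
```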
The size of this model seems different from the one downloaded from the official project's Hugging Face page.
Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [01:07<00:00, 16.99s/it]
Downloading (…)neration_config.json: 100%|████████████████████████████████████████████| 111/111 [00:00<00:00, 55.5kB/s]
Traceback (most recent call last):
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\app.py", line 36, in
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 905, in
How much memory do I need? Is 8 GB not enough?
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:48<00:00, 12.04s/it]
Traceback (most recent call last):
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\app.py", line 36, in
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 905, in
Even 8 GB isn't enough?
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100%|█████████████| 4/4 [00:48<00:00, 12.04s/it]
Traceback (most recent call last):
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\app.py", line 36, in <module>
    bot = StableLMBot()
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\stablelm.py", line 30, in __init__
    self.m = AutoModelForCausalLM.from_pretrained(
        "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16).cuda()
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 905, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [the frame above repeats once per nested submodule]
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 905, in <lambda>
    return self._apply(lambda t: t.cuda(device))
OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 MiB (GPU 0; 8.00 GiB total capacity; 6.93 GiB already allocated; 0 bytes free; 7.13 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Is even 8 GB not enough?
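The allocator hint at the end of the error message can be tried as-is, though it only mitigates fragmentation (when reserved memory is much larger than allocated memory); it cannot create capacity an 8 GiB card does not have. A minimal sketch of setting that option before launching:

```shell
# PYTORCH_CUDA_ALLOC_CONF is read by PyTorch's CUDA caching allocator.
# max_split_size_mb caps the size of blocks the allocator will split,
# which can reduce fragmentation; 128 is an arbitrary starting value.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
```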
For chat_video, GPU memory should be at least 12 GB. StableLM and MOSS may use even more GPU memory; we have only tested them on an 80 GB A100.
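This requirement is consistent with back-of-the-envelope arithmetic: a model with a nominal 7B parameters loaded in `torch.float16` (2 bytes per parameter) needs roughly 13 GiB for the weights alone, before activations, the KV cache, and the CUDA context are counted, so an 8 GiB card cannot hold it. A rough estimate (pure Python, the 7B count is the model's nominal size, not an exact parameter count):

```python
# Estimate GPU memory needed just for the weights of a ~7B-parameter model
# in float16. Activations and CUDA context add several GiB on top of this.
params = 7_000_000_000
bytes_per_param = 2  # torch.float16
weights_gib = params * bytes_per_param / 1024**3
print(f"~{weights_gib:.1f} GiB for weights alone")  # ~13.0 GiB
```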