yecp

Results 13 comments of yecp

Why does this error occur? Because model_type defaults to base, while the model at that path is large. The correct call is model = FastHan(model_type="large", url="D:/code/term weight/Q2/fasthan_large")

In the source code:

```python
def __init__(self, model_type='base', url=None):
    # ---------------------------------------------------
    # create a new model
    if model_type == 'base':
        layer_num = 4
    else:
        layer_num = 8
```

So the tutorial's way of passing a model path needs to be adjusted to `model = FastHan(model_type="base", url="C:/Users/gzc/.fastNLP/fasthan/fasthan_base")` ## model_type is required
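The constructor logic quoted above can be sketched as plain Python (a reconstruction of the snippet for illustration, not the actual fastHan source):

```python
# Minimal sketch (not fastHan's real source) of how model_type selects
# the encoder depth in the constructor quoted above.
def resolve_layer_num(model_type: str = "base") -> int:
    # "base" builds a 4-layer model; any other value ("large") builds 8 layers.
    if model_type == "base":
        return 4
    return 8

# Loading "large" weights into the default 4-layer "base" skeleton is what
# triggers the error, so model_type must match the weights at `url`.
print(resolve_layer_num())         # 4 (default, "base")
print(resolve_layer_num("large"))  # 8
```

This is why passing only `url` is not enough: the layer count is decided by `model_type` before the weights are loaded.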

Another issue: after adding a 4-character word to the user dictionary, the model still fails to segment it out.

```python
from fastHan import FastHan

model = FastHan(model_type="large", url="D:/code/term weight/Q2/fasthan_large")
sentence = "诺基亚(NOKIA)105 新 黑色 直板按键 移动2G手机 老人老年手机 学生备用功能机 超长待机 双卡双待"
answer = model(sentence)
print(answer)
model.add_user_dict(["功能机", "双卡双待"])
answer = model(sentence,...
```

Download this file: https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64 [frpc_linux_amd64_v0.zip](https://github.com/THUDM/VisualGLM-6B/files/13816190/frpc_linux_amd64_v0.zip)

> Deployment works with LLama_factory; the main pain point is installing flash_attn

What versions of transformers and flash_attn are you using? I am also using LLama_factory and hit this error: ImportError: cannot import name 'is_flash_attn_available' from 'transformers.utils' (/sie/anaconda3/envs/yecp/lib/python3.8/site-packages/transformers/utils/__init__.py)

Summary of issues and fixes

1. Installing flash_attn
   Answer: first install the matching version of cuda-nvcc (https://anaconda.org/nvidia/cuda-nvcc), then install a prebuilt flash_attn wheel from https://github.com/Dao-AILab/flash-attention/releases/
   `pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.3.3/flash_attn-2.3.3+cu122torch2.1cxx11abiFALSE-cp38-cp38-linux_x86_64.whl`
2. ImportError: cannot import name 'is_flash_attn_available' from 'transformers.utils'
   Answer: `pip install transformers==4.34.1`; the transformers version must be 4.34, not 4.31, 4.32, or 4.33.

![image](https://github.com/OrionStarAI/Orion/assets/32784059/49b8b467-072a-4661-918b-7a68d4673199)
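A quick way to catch the second problem early is to check the installed transformers version before importing the model code. This is a hedged sketch; the 4.34 requirement comes from the fix above, and the parsing here is deliberately generic:

```python
# Hedged sketch: verify the transformers version string is 4.34.x before
# relying on `is_flash_attn_available` from transformers.utils.
def transformers_version_ok(version: str) -> bool:
    parts = version.split(".")
    # Per the fix above, 4.31-4.33 do not export is_flash_attn_available.
    return len(parts) >= 2 and parts[0] == "4" and parts[1] == "34"

print(transformers_version_ok("4.34.1"))  # True
print(transformers_version_ok("4.31.0"))  # False
```

In practice you would feed it `importlib.metadata.version("transformers")` and reinstall with `pip install transformers==4.34.1` if the check fails.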

> ## Error > Running the official `cli_demo`, `init_model()` raises the following error > > ``` > ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn`...

> ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn`

The flash_attn installation issue. Answer: first install the matching version of cuda-nvcc (https://anaconda.org/nvidia/cuda-nvcc), then install a prebuilt flash_attn wheel from https://github.com/Dao-AILab/flash-attention/releases/ `pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.3.3/flash_attn-2.3.3+cu122torch2.1cxx11abiFALSE-cp38-cp38-linux_x86_64.whl`

> > > ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn` > > > > > > The flash_attn installation issue...

> On Linux with a T4 card, I can't install the flash-attn library; is there a way to run without it?

The flash_attn installation issue. Answer: first install the matching version of cuda-nvcc (https://anaconda.org/nvidia/cuda-nvcc), then install a prebuilt flash_attn wheel from https://github.com/Dao-AILab/flash-attention/releases/ `pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.3.3/flash_attn-2.3.3+cu122torch2.1cxx11abiFALSE-cp38-cp38-linux_x86_64.whl`