Maybe try adding `shm_size`; refer to this: https://stackoverflow.com/questions/30210362/how-to-increase-the-size-of-the-dev-shm-in-docker-container. Or modify `shm_size` on an existing container (see the sketch below): 1. Stop Docker: `sudo systemctl stop docker` 2. `cd` to the container path and `vim hostconfig`: `sudo...`
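For a new container, a minimal sketch of the first option (the image name and size below are placeholders, and these are standard Docker/Compose options rather than anything from this project):

```bash
# Give the container a larger /dev/shm at creation time.
docker run --shm-size=2g my-asr-image

# Or, in docker-compose.yml, add under the service:
#   shm_size: "2gb"
```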
https://github.com/QuentinFuxa/WhisperLiveKit/blob/bc7c32100f3df29c5e6d9f81a30ffb5de70737de/whisperlivekit/audio_processor.py#L79 I encountered the same issue: the server consistently crashes around the 16-minute mark. As an initial fix, I changed `pipe_stderr=True` to `pipe_stderr=False`. After setting `pipe_stderr` to...
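For anyone wondering why this change matters, here is a minimal sketch of the general pattern (not WhisperLiveKit's actual code; the stream options are placeholders): when stderr is piped but never read, ffmpeg blocks once the OS pipe buffer fills, which can surface as a stall or crash after many minutes of audio.

```python
# Sketch with ffmpeg-python: a long-running decode fed over stdin.
# Piping stderr without draining it lets the (~64 KB) pipe buffer fill up and
# block ffmpeg; leaving pipe_stderr=False avoids that (or drain it in a thread).
import ffmpeg

process = (
    ffmpeg
    .input("pipe:0", format="webm")                    # placeholder input format
    .output("pipe:1", format="s16le", ac=1, ar=16000)  # placeholder output format
    .run_async(pipe_stdin=True, pipe_stdout=True, pipe_stderr=False)
)
```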
I've encountered the same issue. What I recently discovered is that the model isn't failing to output; instead, the problem lies in the timestamp values. In these lines: https://github.com/SYSTRAN/faster-whisper/blob/d3bfd0a305eb9d97c08047c82149c1998cc90fcb/faster_whisper/transcribe.py#L1676-L1682...
(translated from Chinese to English) Thanks for your reply! I'd also like to share some updates from my experiments. First, I trained a very simple baseline model with default settings, ...
So I'm thinking... this issue might be somewhat random, which makes it really tricky to deal with.
Same error here. Thanks.
Update: I modified `venv/lib/python3.10/site-packages/nemo/collections/asr/parts/submodules/subsampling.py` and added `torch.set_default_device("cuda")` to temporarily work around this issue (sketch below).
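Roughly what the edit looks like; the exact placement inside `subsampling.py` is up to you, and the CUDA-availability guard is my addition rather than part of the original workaround:

```python
# Temporary workaround: make newly created tensors default to the GPU
# (that is what torch.set_default_device does). Requires torch >= 2.0.
import torch

if torch.cuda.is_available():  # guard added so CPU-only runs still work
    torch.set_default_device("cuda")
```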
Please wait a moment, let me test it.
Yes, it works, thank you. My setup: NVIDIA driver 570, CUDA 12.4, torch 2.5.1, Python 3.10, GPU: L40S.
> Same problem here.
>
> **Small update:** I found that my model always returns meaningless messages like this:
>
> > python -m graphrag.query --root ./myfolder --method global "What is the main theme"
> > "The main theme is 'What is the main theme'"
>
> I found that my local Ollama instance (0.3.0) seems to ignore the system prompt; I got it working by manually concatenating the two prompts:
>
> File: `/graphrag/query/structured_search/global_search/search.py`, method: `_map_response_single_batch`...
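A minimal sketch of the workaround being described, assuming the method builds a typical chat-style message list; the names `search_prompt`, `query`, and `build_messages` are placeholders for illustration, not GraphRAG's actual identifiers:

```python
# Hypothetical sketch of "concatenate the two prompts": instead of sending a
# separate system message (which this Ollama build appears to ignore), fold the
# system prompt into the user message before calling the model.
def build_messages(search_prompt: str, query: str) -> list[dict]:
    combined = f"{search_prompt}\n\n{query}"  # system prompt + user question in one message
    return [{"role": "user", "content": combined}]
```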