youngdev

Results: 12 comments by youngdev

Thanks for the reply. I've edited the issue title. Waiting for the Windows release! :) In the meantime I'll try out Linux.

The project uses tsup for bundling, so Rollup is not needed, but I still can't use code from the custom.js file inside options.html.

I can't find a way to bundle plain .js files. As a workaround I use .ts files containing the .js code.
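
For illustration, a rough sketch of how that workaround could look as a tsup config; the entry path, output directory, and IIFE format are assumptions about the project layout, not taken from the repo:

```ts
// tsup.config.ts (sketch): bundle a TypeScript entry that now holds the old custom.js code,
// emitting a self-contained IIFE that options.html can load with a plain <script> tag.
import { defineConfig } from "tsup";

export default defineConfig({
  entry: { custom: "src/custom.ts" }, // assumed path; the .js code was moved into this .ts file
  format: ["iife"],                   // no module loader needed inside options.html
  outDir: "dist",
  clean: false,
});
```

options.html would then reference the emitted bundle (by default tsup names IIFE output with a `.global.js` suffix, e.g. `dist/custom.global.js`).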

It's a problem in most detection models. You can set the confidence threshold higher (e.g. 0.80) to suppress bad predictions.
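
As a minimal sketch of that kind of filtering (the `Detection` shape and the threshold value are assumptions for illustration, not the project's actual API):

```ts
// Drop detections whose confidence falls below a fixed threshold.
interface Detection {
  label: string;
  score: number; // model confidence in [0, 1]
}

const CONFIDENCE_THRESHOLD = 0.8;

function filterDetections(detections: Detection[]): Detection[] {
  // Keep only predictions the model is reasonably sure about.
  return detections.filter((d) => d.score >= CONFIDENCE_THRESHOLD);
}

// Example: the low-confidence "cat" prediction is discarded.
console.log(filterDetections([
  { label: "person", score: 0.93 },
  { label: "cat", score: 0.41 },
]));
```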

How's the work on multiple-participant support going? That would help greatly! One of the use cases would be live translation via an agent. Most of the logic is already...

```
./ollama serve
(llm-cpp) 0 (02:29.096)
2024/08/09 15:05:11 routes.go:1028: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST: OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1...
```

Managed to hack it by chaining the commands: `sudo kill -9 18737; ./ollama serve`. The computation now runs on the iGPU but doesn't seem to provide any speed benefit compared to...

I've updated the GPU drivers to 32.0.101...; no improvement in performance.

EDIT: It is already implemented in `src/agent/history.js`. You can use in-context learning of LLMs for that. I would suggest creating a 1000-token memory space for the model to write...
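
To make the suggestion concrete, here is a rough sketch of such a memory space; the class name, the ~4-characters-per-token approximation, and the oldest-first trimming policy are all assumptions for illustration, not what `src/agent/history.js` actually does:

```ts
// Sketch: a buffer the model can write notes into, trimmed so it never exceeds
// a fixed token budget before being injected back into the prompt.
const MEMORY_TOKEN_BUDGET = 1000;

// Crude token estimate (~4 characters per token); a real tokenizer would be more exact.
const approxTokens = (text: string): number => Math.ceil(text.length / 4);

class AgentMemory {
  private notes: string[] = [];

  // Called when the model emits something it wants to remember.
  write(note: string): void {
    this.notes.push(note);
    // Drop the oldest notes until the memory fits the budget again.
    while (approxTokens(this.notes.join("\n")) > MEMORY_TOKEN_BUDGET) {
      this.notes.shift();
    }
  }

  // Rendered into the system prompt alongside the recent chat history.
  render(): string {
    return this.notes.join("\n");
  }
}
```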