Tadas Gedgaudas
+1, we really need this
any solution for this?
I'm getting this: ``` root@personal-gpt-tasks-546548ffbb-69c85:/app# llama -m /mnt/data/weights/ggml-alpaca-7B-q4_0.bin --n_parts 1 main: seed = 1679775129 llama_model_load: loading model from '/mnt/data/weights/ggml-alpaca-7B-q4_0.bin' - please wait ... llama_model_load: n_vocab = 32000 llama_model_load: n_ctx =...
looks good :+1:
Just enable all feature flags on your current version and then do the upgrade; it works perfectly
twikit has the transaction id implemented: https://github.com/d60/twikit/blob/main/twikit/x_client_transaction/transaction.py#L141 It could be brought over here by whoever is working on it
Same problem: a 13-million-record database (~100 GB). What I tried: ✅ server has enough disk space ✅ recreated the server, updated Meilisearch to 1.13.3 (latest) and reuploaded all data (initial...
> [@ManyTheFish](https://github.com/ManyTheFish) I have resolved it (rather, put a plaster over it) by reducing the batch size and using a reduced set of searchable fields (I'm not sure if this helped)...
Seems like I found the issue: the Meilisearch task processor takes on too many tasks at once. If you upload a lot of batches, for example I had 3000 tasks of 1000 documents...
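A minimal sketch of the workaround being described: enqueue documents in smaller batches and (optionally) wait for each task before submitting the next, so thousands of tasks never pile up in the queue. The `post` and `wait` callables, the index name, and the batch size are assumptions for illustration, not the poster's actual setup.

```python
def chunked(items, size):
    """Yield successive fixed-size slices of a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def upload_in_batches(post, documents, batch_size=1000, wait=None):
    """Send documents to a (hypothetical) Meilisearch documents endpoint in
    batches; if `wait` is given, block on each task so the task queue stays
    short instead of accumulating thousands of pending tasks."""
    task_ids = []
    for batch in chunked(documents, batch_size):
        task = post("/indexes/my_index/documents", batch)  # index name assumed
        task_ids.append(task["taskUid"])
        if wait is not None:
            wait(task["taskUid"])  # block until Meilisearch processes the task
    return task_ids
```

The same idea works with the official client libraries: the point is simply to bound how many enqueued tasks exist at any moment.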
Thanks for the answer! Somehow I missed the limits part. Deployed now with the limits and a different tmp dir; will see if that improves anything ``` docker run -d \...
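The poster's actual command is truncated above; a hedged sketch of what a deployment with resource limits and a dedicated tmp dir might look like follows. The paths, limit values, image tag, and the choice of mounting `/tmp` are all assumptions for illustration.

```shell
# Sketch only: cap memory and CPU so indexing cannot exhaust the host,
# persist the index data, and give Meilisearch a dedicated tmp dir.
# All values below are placeholders, not the poster's real configuration.
docker run -d \
  --name meilisearch \
  --memory 8g \
  --cpus 4 \
  -p 7700:7700 \
  -v /mnt/meili_data:/meili_data \
  -v /mnt/meili_tmp:/tmp \
  -e MEILI_MASTER_KEY=change-me \
  getmeili/meilisearch:v1.13.3
```

`--memory` and `--cpus` are standard `docker run` flags; whether limiting them helps here depends on whether the task processor was actually memory- or CPU-bound.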