Richard Vencu
When working with sessions, I also found that having to change headers for each call hurt actual performance for too little gain...
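The idea can be sketched without any HTTP library: set session-level defaults once and only merge call-specific overrides when a call truly needs them (a toy illustration of the pattern, not the actual code from my tests):

```python
class Session:
    """Toy stand-in for an HTTP session that carries default headers."""

    def __init__(self, default_headers):
        self.headers = dict(default_headers)  # set once, reused for every call

    def request(self, url, headers=None):
        # Only merge when a call really needs an override; otherwise the
        # session-level dict is used as-is, avoiding per-call rebuild work.
        merged = self.headers if not headers else {**self.headers, **headers}
        return merged  # a real session would send the request here


s = Session({"Authorization": "Bearer TOKEN", "Accept": "application/json"})
s.request("https://example.com/api")                           # defaults only
s.request("https://example.com/api", headers={"Accept": "text/csv"})  # one-off override
```

The same shape applies to `requests.Session`, where `session.headers` holds the defaults and per-call `headers=` arguments are merged on top.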
I got to a sustained 1G. Please be aware that disk write speeds are at best 2G on NVMe; you may want to use 4 NVMe drives in RAID 0 to reach almost...
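As a rough sanity check on the arithmetic (the 90% efficiency factor is my assumption; real RAID 0 scaling varies with controller and workload):

```python
per_drive_write = 2.0   # GB/s, assumed best-case sustained NVMe sequential write
drives = 4              # RAID 0 stripes writes across all members
raid_efficiency = 0.9   # assumption: ~10% overhead, rarely perfectly linear

aggregate = per_drive_write * drives * raid_efficiency
print(f"~{aggregate:.1f} GB/s aggregate write throughput")
```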
Thanks. I decided to switch to a Linux box; still waiting for the hardware delivery, then I will resume fresh.
I think the problem is that I am initially calling an API endpoint where I authenticate with Basic authentication. I can see that the WP_Async_Request class tries to pass...
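For reference, Basic authentication is just a base64-encoded `user:password` pair in the `Authorization` header (RFC 7617), so whatever request the async class fires must carry that header too. A sketch in Python for illustration (the credentials are made up):

```python
import base64


def basic_auth_header(user: str, password: str) -> dict:
    """Build the Authorization header that Basic authentication expects."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}


print(basic_auth_header("api_user", "s3cret"))
# {'Authorization': 'Basic YXBpX3VzZXI6czNjcmV0'}
```

If the background request is dropping this header, re-adding it to the async request's own argument list is the first thing to check.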
Well, I hooked an action to the `load_textdomain` action hook inside the admin plugin, like this: `public function crispbot_change_locale(){ switch_to_locale( "ro_RO" ); unload_textdomain( 'crisp-bot-wp' ); load_plugin_textdomain( 'crisp-bot-wp', false, dirname( dirname(...`
I believe the answer would be: make a local instance from the RDB file, then SCANDUMP locally and LOADCHUNK remotely... I played with a Python script that can be easily adapted...
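A minimal sketch of that loop, assuming redis-py clients and the RedisBloom `BF.SCANDUMP` / `BF.LOADCHUNK` pair (fake clients stand in for the real servers here so the sketch runs standalone):

```python
def migrate_bloom_filter(src, dst, key):
    """Copy a RedisBloom filter chunk by chunk via BF.SCANDUMP / BF.LOADCHUNK.

    src/dst are assumed to be redis-py clients (anything with execute_command).
    """
    iterator, chunks = 0, 0
    while True:
        # BF.SCANDUMP returns (next_iterator, data); iterator 0 means done
        iterator, data = src.execute_command("BF.SCANDUMP", key, iterator)
        if iterator == 0:
            break
        # replay each chunk on the remote side with the same iterator value
        dst.execute_command("BF.LOADCHUNK", key, iterator, data)
        chunks += 1
    return chunks


# Stand-in clients so the sketch runs without a live Redis (illustration only)
class FakeSrc:
    def __init__(self, chunks):
        self._chunks, self._i = chunks, 0

    def execute_command(self, cmd, key, iterator):
        if self._i < len(self._chunks):
            self._i += 1
            return self._i, self._chunks[self._i - 1]
        return 0, None


class FakeDst:
    def __init__(self):
        self.loaded = []

    def execute_command(self, cmd, key, iterator, data):
        self.loaded.append((iterator, data))


src, dst = FakeSrc([b"chunk-1", b"chunk-2"]), FakeDst()
print(migrate_bloom_filter(src, dst, "my-filter"))  # 2 chunks copied
```

With real servers, `src` would point at the local instance started from the RDB file and `dst` at the remote one.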
I removed some of these basic errors; here is my last cell content. But the code breaks in CUDA for some reason: ``` from random import choice from pathlib import...
The file name changed, so the line became ```from dalle_pytorch.tokenizer import tokenizer, HugTokenizer```. I posted the entire cell to overwrite the old one...
> The CUDA error happens when the tokenizer produces indices that are too big for the text embedding. This fixes it: > > ```python > VOCAB_SIZE = tokenizer.vocab_size > ```...
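That matches how embedding lookups behave in general: any token id greater than or equal to the table's first dimension is out of bounds, so the model's vocabulary size must match the tokenizer's. A minimal NumPy illustration of the failure mode (on CUDA, PyTorch surfaces the analogous condition as a device-side assert):

```python
import numpy as np

vocab_size, dim = 100, 8
embedding = np.random.rand(vocab_size, dim)  # one row per token id

ok = embedding[[3, 42, 99]]   # all ids < vocab_size: fine
print(ok.shape)               # (3, 8)

try:
    embedding[[3, 42, 100]]   # id 100 exceeds the table -> error
except IndexError as e:
    print("out-of-range token id:", e)
```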