EyeDeck

37 comments of EyeDeck

Hmm, it appears to be related to reuse of earlier temp variables created for the same function.

```
scriptname boolTestC

struct testStruct
	bool b
endStruct

Function TestBoolStruct()
	testStruct structA =...
```

Ah well, I wouldn't worry about it too much then. I doubt anyone other than me has ever run into this, and Caprica is still so criminally underrated that I'd...

Not sure if I'm reading this wrong, but it _should_ already work that way, at least it does for me. If you make a batch of 10 images with a...

The enchantment _does_ work, but the grinder's missing code to read/write an appropriate NBT tag, so the enchantment is lost every time the TE unloads. Working example, the woodcutter: https://github.com/ReikaKalseki/RotaryCraft/blob/ff7ba46cba5134c8cea59b416703ab348424cca3/TileEntities/Farming/TileEntityWoodcutter.java#L595...
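
For illustration, a minimal sketch of what that persistence could look like in a 1.7.10-era Forge TileEntity. The class name, the `enchantments` field, and the NBT keys below are placeholders of mine, not RotaryCraft's actual code; see the linked woodcutter for the real implementation:

```java
// Hypothetical sketch only; the real pattern is in the woodcutter linked above.
import java.util.HashMap;

import net.minecraft.enchantment.Enchantment;
import net.minecraft.nbt.NBTTagCompound;
import net.minecraft.tileentity.TileEntity;

public class TileEntityGrinderSketch extends TileEntity {

	// enchantment effectId -> level, filled in when an enchanted book is applied
	private final HashMap<Integer, Integer> enchantments = new HashMap<Integer, Integer>();

	@Override
	public void writeToNBT(NBTTagCompound NBT) {
		super.writeToNBT(NBT);
		// persist each applied enchantment so it survives the TE unloading
		NBTTagCompound tag = new NBTTagCompound();
		for (Integer id : enchantments.keySet()) {
			tag.setInteger("ench_" + id, enchantments.get(id));
		}
		NBT.setTag("enchants", tag);
	}

	@Override
	public void readFromNBT(NBTTagCompound NBT) {
		super.readFromNBT(NBT);
		// restore the map from the tag written above
		enchantments.clear();
		NBTTagCompound tag = NBT.getCompoundTag("enchants");
		for (Enchantment e : Enchantment.enchantmentsList) {
			if (e != null && tag.hasKey("ench_" + e.effectId)) {
				enchantments.put(e.effectId, tag.getInteger("ench_" + e.effectId));
			}
		}
	}
}
```

The key point is just that whatever map the enchanted book populates has to round-trip through `writeToNBT`/`readFromNBT`, the same way the woodcutter does it.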

Looks like everything here is working properly, using USBHost's requantized models. However, there's currently still a small bug in GPTQ-for-LLaMa:

```
Loading llama-30b...
Traceback (most recent call last):
  File "G:\text-generation-webui\server.py",...
```

Works for me. Without this patch, the latest GPTQ-for-LLaMA crashes with:

```
Traceback (most recent call last):
  File "/home/anon/text-generation-webui/server.py", line 912, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "/home/anon/text-generation-webui/modules/models.py", line 125,...
```

I also have 64GB of system RAM + a 3090, and I can't load 30B without allocating like 50GB of page file or swap.
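
(For rough context, my own arithmetic rather than anything measured: "30B" LLaMA is actually ~32.5B parameters, so just materializing the model in fp16 is 32.5e9 × 2 bytes ≈ 65 GB, already past 64 GB before the OS and everything else. That would explain needing that much page file if the loader builds the full fp16 model before swapping in the 4-bit weights.)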

Runs on my machine, but the new GPTQ-for-LLaMA code gives garbage output. Seems to either pick a token at random and just spam it endlessly, or start spewing irrelevant nonsense...

3090 in my case. If I run `llama_inference.py` directly... I'm not sure if it works either.

```
CUDA_VISIBLE_DEVICES=0 python llama_inference.py ../../models/LLaMA-30B-4bit-128g/ --wbits 4 --groupsize 128 --load ../../models/LLaMA-30B-4bit-128g/LLaMA-30B-4bit-128g-tsao.pt --text "this is llama"...
```