Vegax

Showing 10 issues of Vegax

Every time I try to compile any Python file with pycom, it gives me this error. I think it's related to accent marks in the system language, because I haven't...

[BunJS](https://bun.sh/) is at least 3 times faster than NodeJS; could you give it a try?

help wanted

Several years ago there was a WIP extension for OAuth2 in Weppy: https://github.com/gi0baro/weppy-oauth2/blob/master/weppy_oauth2/ext.py Do you think the old code is still compatible, aside from the async/await changes, and can be resumed, or have there...

question

I think it's worth adding them... reference: https://lorisleiva.com/laravel-pagination-with-tailwindcss/

enhancement

I followed every step in the installation:
```
ollama serve
git clone https://github.com/stitionai/devika.git
cd devika/
uv venv
uv pip install -r requirements.txt
cd ui/
bun install
bun run dev
python3...
```

Recently, the Xorshift128+ algorithm widely used in the V8 JavaScript engine was reverse-engineered with the Z3 theorem prover. There are a few examples, like this blog article: https://blog.securityevaluators.com/hacking-the-javascript-lottery-80cc437e3b7f and https://github.com/steven200796/xorshift128plus_exploit Could...
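
For context, here is a minimal sketch of that state-recovery idea in Python with z3-solver. It assumes the observed double is built from the top 52 bits of state0, as in the writeups linked above; the exact mapping varies by V8 version, and V8 serves Math.random() values from a cache in reverse generation order, which a real exploit has to account for. The observed values below are placeholders, not real outputs.

```
# Sketch: recover xorshift128+ state from observed Math.random() doubles.
# Assumption: double = (state0 >> 12) / 2**52, per the linked writeups.
from z3 import BitVec, Solver, LShR, sat

def step(s0, s1):
    # One symbolic round of V8's XorShift128 on 64-bit bitvectors.
    x, y = s0, s1
    x ^= x << 23
    x ^= LShR(x, 17)      # LShR = logical (unsigned) right shift
    x ^= y ^ LShR(y, 26)
    return y, x           # new (state0, state1)

# Placeholder observations; substitute real consecutive Math.random() values.
observed = [0.4522227004430707, 0.8683902964753574, 0.1019172462421693]

solver = Solver()
s0, s1 = BitVec("s0", 64), BitVec("s1", 64)
st0, st1 = s0, s1
for value in observed:
    st0, st1 = step(st0, st1)
    mantissa = int(value * (1 << 52))         # top 52 bits of state0
    solver.add(LShR(st0, 12) == mantissa)

if solver.check() == sat:
    model = solver.model()
    print("recovered seed:", model[s0], model[s1])
else:
    print("unsat: observations don't fit the assumed version/mapping")
```

Once the solver yields a concrete state, future outputs can be predicted by stepping the recovered state forward with the same round function.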

DirectML can run any LLM on any GPU that supports DX12:
```
if opt.use_dml:
    import torch_directml
    device = torch_directml.device(torch_directml.default_device())
else:
    device = torch.device('cuda' if opt.cuda else 'cpu')
```

If you play as an elf and go underwater, you will notice that the oxygen bar is reduced by a quarter compared to the full bar. Underwater times for all...

missing functionality

I encountered an issue while trying to use the gpuOffload configuration as stated in the README: `const llama3 = await client.llm.load(modelPath, { config: { gpuOffload: "max" } });` However, this...

[DeepSeek R1 1.58-bit](https://huggingface.co/unsloth/DeepSeek-R1-GGUF) Note: Unsloth uses some sort of dynamic quantization