Comments of TheMrCodes

Hi, same here as @jmfirth. I'm a programmer by trade and would love to have a GUI to quickly tinker with LLM pipeline ideas. For me the solution to have...

Maybe I missed it until now, but is there any way to restrict which pipelines can be used on a per-user basis (ACLs for pipelines)? Or is this handled...

The basic build works now. 🚀 I want to test the feature again once the llama.cpp version gets bumped.

Known issues:
- llama.cpp hangs after loading the model into VRAM on Intel...
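For anyone trying to reproduce the hang, a minimal sketch using the llama-cpp-python bindings; the model path and a SYCL/Intel-GPU-enabled llama.cpp build are assumptions, not part of the original report:

```python
# Minimal hang-repro sketch via the llama-cpp-python bindings.
# Assumes an Intel-GPU-enabled build of llama.cpp and a local GGUF
# model at ./model.gguf (both hypothetical placeholders).
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",  # hypothetical path
    n_gpu_layers=-1,            # offload all layers into VRAM
    verbose=True,               # log loading; the hang appears after this
)

# If the process stalls here instead of printing a completion,
# the "hangs after loading into VRAM" behavior is reproduced.
print(llm("Hello", max_tokens=8)["choices"][0]["text"])
```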

Hi there, ran into a weird but similar issue using an Arc A770 and ipex version 2.1.30+xpu ![image](https://github.com/intel/intel-extension-for-pytorch/assets/28106561/3fb3134d-5ac8-47e6-9ea8-8277aee59537) The two runs with the highest accuracy were done on my CPU...
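For anyone triaging a similar CPU-vs-XPU accuracy gap, a minimal sketch that runs the same computation on both devices (assuming a working IPEX install; the tensor sizes are arbitrary):

```python
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device

print(torch.__version__, ipex.__version__)
print("XPU available:", torch.xpu.is_available())

# Same matmul on CPU and XPU; a large max difference would point at
# the device kernels rather than the evaluation harness.
x = torch.randn(1024, 1024)
w = torch.randn(1024, 1024)
cpu_out = x @ w
xpu_out = (x.to("xpu") @ w.to("xpu")).cpu()
print("max abs diff:", (cpu_out - xpu_out).abs().max().item())
```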

No, sorry, I can't. I no longer have the A770 installed in my PC, so I would appreciate it if someone else could re-run the test with the code above.

@gitchat1 Sorry to disappoint, but my implementation currently only supports integrated Xe graphics (so the Intel Core Ultra line) and dedicated Intel Arc GPUs using ipex (the Intel Extension for PyTorch). For...
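To illustrate what that IPEX-based device support looks like in practice, a minimal selection sketch; the `nn.Linear` model is a hypothetical stand-in, not the actual implementation:

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex  # registers torch.xpu

# Pick the Intel GPU ("xpu") when IPEX sees one, otherwise fall back to CPU.
device = "xpu" if torch.xpu.is_available() else "cpu"

model = nn.Linear(16, 4).to(device).eval()  # hypothetical stand-in model
model = ipex.optimize(model)                # apply IPEX kernel optimizations

with torch.inference_mode():
    out = model(torch.randn(1, 16, device=device))
print(device, out.shape)
```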

Note: This could be further optimized, and support for more Intel hardware specifics could be added if OpenVINO were implemented. But as far as I know the whole model has to be...
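For context, a minimal sketch of what the OpenVINO path would look like; the IR file name and the GPU target are assumptions for illustration:

```python
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU'] on supported hardware

# OpenVINO compiles the whole model graph for one target device up front,
# which is the constraint the note above alludes to.
model = core.read_model("model.xml")         # hypothetical IR file
compiled = core.compile_model(model, "GPU")  # target the Intel GPU plugin
```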