TheMrCodes
The goal of this pull request is to add Intel GPU support to the llama.cpp build script of rllm. This is done by expanding the cargo feature flags...
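As a rough illustration of what expanding the cargo feature flags could look like, a build script might map a hypothetical `sycl` feature to the CMake switch that enables llama.cpp's Intel GPU (SYCL) backend. The feature name, the `GGML_SYCL` define, and the use of the `cmake` crate are assumptions made for this sketch, not the actual contents of the PR:

```rust
// build.rs — illustrative sketch only, not the real rllm build script.
// Cargo exposes every enabled feature to build scripts as a
// CARGO_FEATURE_<NAME> environment variable.
use cmake::Config;

fn main() {
    let mut cfg = Config::new("llama.cpp");

    // Hypothetical `sycl` cargo feature toggling the Intel GPU backend.
    if std::env::var("CARGO_FEATURE_SYCL").is_ok() {
        // Assumed CMake option name for llama.cpp's SYCL backend.
        cfg.define("GGML_SYCL", "ON");
    }

    let dst = cfg.build();
    println!("cargo:rustc-link-search=native={}/lib", dst.display());
}
```

The matching `Cargo.toml` would then declare the feature (e.g. `sycl = []`) so users can opt in with `cargo build --features sycl`.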
Feature Request: Add a combo box for model selection to the settings. Why a combo box? So that the user can not only select predefined models like 'gpt-3.5-turbo' or 'gpt-4' but...
**Describe the feature you'd like**
Implementation of the Intel GPU backend via PyTorch.

**Additional context**
This could also be used for integrated Arc GPUs (Arrow Lake and up). Quite simple...