leuc
WIP: https://github.com/leuc/org.phoboslab.wipeout
Built with GitHub CI: https://github.com/leuc/org.phoboslab.wipeout/releases/download/v0.1-alpha0/wipEout-rewrite-x86_64.flatpak
Build adjusted to use `PATH_ASSETS`; saved game state is now preserved. https://github.com/leuc/org.phoboslab.wipeout/releases/download/v0.0.1-alpha2/wipEout-rewrite.v0.0.1-alpha2.flatpak
Similar issue while trying to run openai-whisper on an A770:

```diff
 from . import load_model
+import intel_extension_for_pytorch as ipex

 model = load_model(model_name, device=device, download_root=model_dir)
+model.eval()
+model = model.to('xpu')
...
```

Compilation took hours and multiple attempts, but whisper is working with the xpu-master branch and even loads the large model into the 16 GB of VRAM.

```sh
$ whisper --language en --model...
```
Above warnings go away when `ipex.optimize(model)` is omitted.

Found a metric to display GPU memory usage using [lsgpu](https://gitlab.freedesktop.org/drm/igt-gpu-tools/-/blob/master/tools/lsgpu.c).

Normal usage:

```sh
> lsgpu -p | grep ^lmem_
lmem_avail_bytes : 16260284416
...
```
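As a quick derived metric, used VRAM can be computed from the `lmem_total_bytes` and `lmem_avail_bytes` properties. A sketch, assuming both keys appear in `lsgpu -p` output in the `key : value` form shown above; the `printf` sample values below are illustrative stand-ins for the real pipe:

```sh
# Parse lsgpu-style "key : value" lines and report used local memory.
# Replace the printf with `lsgpu -p` on a machine with a discrete GPU.
printf 'lmem_total_bytes : 16777216000\nlmem_avail_bytes : 16260284416\n' |
awk '/^lmem_total_bytes/ {t=$NF}
     /^lmem_avail_bytes/ {a=$NF}
     END {printf "lmem_used_bytes : %d\n", t-a}'
```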
Took hours to build, so I uploaded **__unofficial__** wheels of xpu-master here: https://github.com/leuc/intel-extension-for-pytorch/releases/tag/v1.13.120%2Bgit5fdf9e6
@fredlarochelle it wasn't a resource issue; the script just doesn't build well without conda. I may work on a PR for better portability, aimed at CI/CD and containers.
> what are error messages? I would recommend to do the compilation in a docker container.

Addressed some build issues with PR https://github.com/intel/intel-extension-for-pytorch/pull/334
```sh
cat /proc/device-tree/model
StarFive VisionFive V2
cat /proc/device-tree/compatible
starfive,visionfive-v2starfive,jh7110
cat /proc/device-tree/serial-number
VF7110A1-2250-D008E000-00000824
cat /proc/cpuinfo
processor : 0
hart      : 1
isa       : rv64imafdc
mmu       : sv39
uarch     : sifive,u74-mc
processor...
```
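The `compatible` entries print run together because the file stores NUL-separated strings, so plain `cat` concatenates them. A sketch using the two strings from the dump above; on the board itself the equivalent is `tr '\0' '\n' < /proc/device-tree/compatible`:

```sh
# device-tree 'compatible' is a list of NUL-terminated strings; translating
# NULs to newlines puts each entry on its own line.
printf 'starfive,visionfive-v2\0starfive,jh7110\0' | tr '\0' '\n'
```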