huiyan2021
@raoberman @jkinsky @jimmytwei , please help review as well, thanks!
Thanks for reporting this issue; it can be reproduced. "Device does not exist" is raised by this line: https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/models/model_utils.py#L441. "Device is not supported" occurs when setting device='xpu:0' because only "xpu" is handled....
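The "xpu:0" failure above comes from comparing the full device string instead of its base type. A minimal, hypothetical sketch of the fix (this is not the repo's actual code; the helper name and accepted device list are assumptions for illustration):

```python
def normalize_device(device: str) -> str:
    # Hypothetical helper: strip an optional index so "xpu:0" is treated as "xpu".
    base = device.split(":")[0]
    if base not in ("cpu", "cuda", "xpu", "hpu"):
        raise ValueError(f"Device does not exist: {device}")
    return base
```

With this kind of normalization, `device='xpu:0'` would pass the same branch as `device='xpu'`.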
It seems `torch.xpu.is_available()` works in a .py script but not in a Jupyter notebook. In the .py script, the issue is that the model is too large to fit...
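When `torch.xpu.is_available()` behaves differently across environments, a defensive device probe avoids hard failures. A minimal sketch, assuming `torch.xpu` is only present when the XPU backend (e.g. intel-extension-for-pytorch) is installed; the function name is hypothetical:

```python
def pick_device() -> str:
    """Hypothetical fallback probe: prefer XPU, then CUDA, else CPU."""
    try:
        import torch
        # torch.xpu only exists when an XPU-capable build/extension is present.
        if hasattr(torch, "xpu") and torch.xpu.is_available():
            return "xpu"
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        # torch not installed at all; fall back to CPU.
        pass
    return "cpu"
```

Running this in both the script and the notebook would confirm whether the notebook kernel sees the XPU at all.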
@brent-elliott it seems that removing libstdc++* from `/home/REDACTED/miniconda3/envs/jupyter2/lib` fixes the issue where `torch.xpu.is_available()` returns false in a Jupyter notebook
> > @brent-elliott seems that removing libstdc++* from `/home/REDACTED/miniconda3/envs/jupyter2/lib` can fix the issue that `torch.xpu.is_available()` return false in jupyter notebook
>
> Thank you. Removing these files resolved the issue...
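The root cause here is typically that conda ships its own libstdc++ which shadows the system one inside the notebook kernel. To diagnose which copy a given process actually loaded, one can inspect `/proc/self/maps` on Linux. A small sketch (the helper name is an assumption; Linux-only):

```python
def loaded_libstdcxx_paths():
    """List libstdc++ shared objects mapped into the current process (Linux only)."""
    paths = set()
    try:
        with open("/proc/self/maps") as f:
            for line in f:
                if "libstdc++" in line:
                    # The pathname is the last whitespace-separated field.
                    paths.add(line.split()[-1])
    except FileNotFoundError:
        # No procfs (e.g. macOS/Windows); nothing to report.
        pass
    return sorted(paths)
```

Comparing the output between the plain-Python script and the Jupyter kernel shows whether the kernel is loading the conda copy instead of the system one.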
Hi @brent-elliott, this sample needs to be run against the latest main branch of intel-extension-for-transformers, since there are some message-format changes in the OpenAI API. It also requires [#1289](https://github.com/intel/intel-extension-for-transformers/pull/1289) and [#1294](https://github.com/intel/intel-extension-for-transformers/pull/1294)
Hi @chao-camect , thanks for reporting this issue. Could you share a small reproducer so that we can investigate?
Could you also run this environment-check script and upload the result here? Thanks! https://github.com/intel/intel-extension-for-tensorflow/blob/main/tools/python/env_check.py
Hi @chao-camect , I am running the training script on an Arc A770 with the docker image that we published: https://intel.github.io/intel-extension-for-tensorflow/latest/docs/install/install_for_xpu.html#get-docker-container-from-dockerhub

```
Total 25000 training images.
Epoch 1/10000
2024-04-10 04:00:56.653123: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:117] Plugin optimizer for...
```
> More context: I run it inside docker. I installed the deps inside docker using the following script:
>
> ```
> wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | sudo gpg --dearmor --output...
> ```