Taekmin Kim
It might depend on training configuration or datasets. Could you provide your implementation details?
@PotekhinRoman I used different types, as follows:

```
--inference_input_type=QUANTIZED_UINT8 \
--inference_type=FLOAT \
```

Let's try using the same type for both `inference_input_type` and `inference_type`!
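A minimal sketch of what a matching configuration might look like, assuming the standard `tflite_convert` CLI; the file names and array names here are hypothetical placeholders, not from the original thread:

```shell
# Hypothetical conversion command: both type flags set to the same value
# (QUANTIZED_UINT8) so the converter does not mix a quantized input
# with float inference, which is the mismatch suspected above.
tflite_convert \
  --graph_def_file=frozen_graph.pb \
  --output_file=model.tflite \
  --input_arrays=input \
  --output_arrays=output \
  --inference_input_type=QUANTIZED_UINT8 \
  --inference_type=QUANTIZED_UINT8
```

Fully quantized conversion usually also needs quantization statistics (e.g. `--mean_values`/`--std_dev_values`) depending on how the model was trained, so treat this as a starting point rather than a complete command.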
* Tutorials: [Part 1](https://medium.com/@tantara/adversarial-attack-part-1-a830ec92acde), [Part 2](https://medium.com/@tantara/adversarial-attack-part-2-peernets-fd5ff62818a1)
* PyTorch implementation of PeerNets: [tantara/peernets-pytorch](https://github.com/tantara/peernets-pytorch)
* Docker image: [tantara/peernets-pytorch](https://cloud.docker.com/repository/docker/tantara/peernets-pytorch)
One more data point for your PR. I ran it on my 4090 (torch==2.2.2, CUDA 12.1, NVIDIA driver 530.30.02):

* llm.c (main branch): step 1: train loss 4.406586...
Hi @xenova, could you review the PR for the chrome extension example?
Hi @xenova, thanks for following up. I just wanted to keep this PR to the bare minimum for a chrome extension with Plasmo. I think Plasmo would require more setup than your extension...
@xenova @LexiestLeszek I submitted a PR for the chrome extension. Feedback is always welcome: https://github.com/huggingface/transformers.js-examples/pull/16