Chun-Wei Chen
> If there's something else in mapping.py that is truly needed by users, we can expose that in helper.py as well. WDYT?

That makes sense to me, but I...
> That makes sense to me, but I have one concern: I believe external users started using variables from mapping.py a long time ago...

IMO, it's...
> It makes sense. However, I'm afraid it introduces a new required package (psutil). What about changing the line

Good catch. I didn't realize it's not part of the Python standard library.

> ...
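(For context, a minimal sketch of one way to address that concern, assuming the goal is simply to avoid a hard dependency: import psutil lazily and degrade gracefully. The helper name is hypothetical.)

```python
# Hypothetical sketch: keep psutil optional instead of a hard requirement.
try:
    import psutil

    def available_memory_bytes():
        # psutil is installed: report actually available memory.
        return psutil.virtual_memory().available
except ImportError:
    def available_memory_bytes():
        # psutil is absent: report "unknown" and let callers skip the check.
        return None
```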
Hi @michaelroyzen, if possible, could you please provide the failing model so I can repro? (I know it's a huge one.) Thanks!
Thanks for providing them, but those are only the original PyTorch and TF models. May I have the converted ONNX model if you have it handy?
I don't have onnxruntime-gpu handy, so I tried running your script without `--optimize_for_gpu` and with `CPUExecutionProvider`. The converted ONNX model seems fine. It would be very helpful if you...
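(A minimal sketch of that CPU-only check; `model.onnx` is a placeholder for the actual converted model.)

```python
import onnx
import onnxruntime as ort

model_path = "model.onnx"  # placeholder: the converted model

# Validate the graph first, then create a CPU-only session.
onnx.checker.check_model(model_path)
sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
print([inp.name for inp in sess.get_inputs()])
```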
> Did it really work for you @jcwchen? I re-ran the script without --optimize_for_gpu and with CPUExecutionProvider and it gave the same ValueError as above.

Sorry, I might not be...
Thanks for the details. Quick question: did you set `all_tensors_to_one_file=False` together with `use_external_data_format=True`? If so, how about also setting `convert_attribute=True`? Or perhaps try a smaller size threshold to decouple the large ONNX model...
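(For reference, a minimal sketch of the suggested save call using the `onnx.save_model` external-data parameters; note that in that API the flag is named `save_as_external_data`, and the paths here are placeholders.)

```python
import onnx

model = onnx.load("large_model.onnx")  # placeholder path

# Write tensors to separate external files instead of a single one,
# and externalize attribute tensors as well.
onnx.save_model(
    model,
    "large_model_external.onnx",
    save_as_external_data=True,
    all_tensors_to_one_file=False,
    convert_attribute=True,
    size_threshold=1024,  # bytes; lower this to externalize more tensors
)
```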
> `convert_attribute=True` did not work:

I meant the API for onnx.save_model. Please check [here](https://github.com/onnx/onnx/blob/66480ce6ab0f4c56e27bb120f246c3ddd0d1e19e/onnx/__init__.py#L171).

> However, converting the model to fp16 and then saving it did work.

Good to know you...
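(The fp16 route that reportedly worked can be sketched as below; using onnxconverter-common is an assumption about which converter was involved, and the paths are placeholders.)

```python
import onnx
from onnxconverter_common import float16

model = onnx.load("large_model.onnx")  # placeholder path

# Cast float32 tensors to float16, roughly halving the model size,
# which can bring the protobuf back under the 2GB limit.
model_fp16 = float16.convert_float_to_float16(model)
onnx.save_model(model_fp16, "model_fp16.onnx")
```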
Hi @JingyaHuang, thanks for the updates! Have you tried `size_threshold=0`? I keep trying to repro this error, but I still see different failures instead of this 2GB issue, so it's...
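(A sketch of that suggestion, assuming the same `onnx.save_model` call as above; `size_threshold=0` forces every tensor out of the protobuf.)

```python
import onnx

model = onnx.load("large_model.onnx")  # placeholder path

# size_threshold=0 externalizes all tensors, keeping the protobuf minimal.
onnx.save_model(
    model,
    "model_external.onnx",
    save_as_external_data=True,
    all_tensors_to_one_file=True,
    size_threshold=0,
)
```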