tc-wolf
@gbaned - No problem, I rebased onto master and AFAICT there shouldn't be any merge conflicts! Is that what we want, or should I rebase vs. the `v2.9.1`...
I think a better approach would be to add a reshape op like we do in the builder for `Rsqrt`. I've tested, and manually reshaping to rank-4 inputs *does* work...
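For context, a minimal sketch of the manual workaround (hypothetical shapes, and `tf.math.rsqrt` standing in for the builder's `Rsqrt` op):

```python
import tensorflow as tf

# Hypothetical rank-2 input; the op in question only handles rank-4 tensors.
x = tf.random.uniform((8, 16))
original_shape = tf.shape(x)

# Pad the shape with leading singleton dimensions up to rank 4,
# apply the op, then reshape back to the original shape.
x4 = tf.reshape(x, (1, 1, 8, 16))
y4 = tf.math.rsqrt(x4)
y = tf.reshape(y4, original_shape)
```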
Looks like this was fixed by [70c50e734eac50e67317614413da04ea53acd528](https://github.com/tensorflow/tensorflow/commit/70c50e734eac50e67317614413da04ea53acd528)
Thank you for looking into this :)
I'm also interested, since it looks like some of the [examples](https://github.com/quic/aimet/blob/develop/Examples/tensorflow/quantization/keras/keras_transformer_qat.ipynb) use Keras models without the sessions-based API, but the TensorFlow [adaround](https://github.com/quic/aimet/blob/develop/Examples/tensorflow/quantization/adaround.ipynb) and other examples use `import tensorflow.compat.v1...`
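For contrast, here are minimal sketches of the two styles (toy models only, not AIMET's actual API). The sessions-based flavor quoted above looks like:

```python
# TF1-compat, sessions-based style (as in the adaround example).
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, shape=(None, 4), name="input")
y = tf.layers.dense(x, units=2)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]}))
```

whereas the Keras examples build and call models directly, with no session (run as a separate script, since `disable_v2_behavior()` affects the whole process):

```python
# Keras style (as in the keras_transformer_qat example).
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
print(model(tf.constant([[1.0, 2.0, 3.0, 4.0]])))
```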
I'll look into #1527 and see if that works, thanks!
I've tried that workaround, but had to also comment out the code defining `NATIVE_FP16 1` in [half.h](https://github.com/pytorch/executorch/blob/3a2b2e8/runtime/core/portable_type/half.h#L25-L33):

```c
/*
#if defined(__GNUC__) || defined(__clang__)
#if defined(__aarch64__)
#ifndef __ARM_V8_ONLY__
#define NATIVE_FP16 1
...
```
Slow but steady progress - I ran into https://github.com/pytorch/executorch/issues/2955 (I think the instructions in setup.md need to be updated to add `-DEXECUTORCH_BUILD_SDK=ON` to the first build step), but after specifying that...
Can confirm that I can successfully export and run models with `qnn-executor-runner`, though the outputs I'm getting from the converted ViT model don't look close to the original's. I'll debug that...
Thanks for the information, that's very helpful!