Dina Suehiro Jones
I think I figured out the error with the inputs. It might have had to do with the fact that I'm starting from a saved_model.pb which has inputs...
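In case it helps anyone hitting a similar input mismatch, here's a minimal sketch of how to inspect a SavedModel's input signature with standard TF 2.x APIs; the directory path and the `serving_default` key are placeholders, not values from this repo:

```python
import tensorflow as tf

# Load the SavedModel directory (the folder that contains saved_model.pb).
loaded = tf.saved_model.load("path/to/saved_model_dir")

# List the available signatures and print the default signature's inputs/outputs.
print(list(loaded.signatures.keys()))
serving_fn = loaded.signatures["serving_default"]
print(serving_fn.structured_input_signature)
print(serving_fn.structured_outputs)
```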
@ftian1 Thanks, your response helped me figure out a bit of what was going on. Since I switched to follow the example from `neural-compressor/examples/engine/nlp/bert_base_mrpc` instead of the example from `neural-compressor/examples/tensorflow/nlp/bert_base_mrpc`,...
@tfboyd I posted the PR. 😄
The BERT large SQuAD training log will have values like `INFO:tensorflow:examples/sec: ...`. This number can be multiplied by the number of MPI processes (in your example, that's 2 since you...
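As a rough sketch of that calculation (the log file name and the exact regex are assumptions about the log format, not taken from a specific run), total throughput could be computed like this:

```python
import re

def total_examples_per_sec(log_path, num_mpi_processes):
    """Average the per-worker `examples/sec` values and scale by worker count."""
    values = []
    with open(log_path) as f:
        for line in f:
            # Matches lines like: INFO:tensorflow:examples/sec: 123.4
            match = re.search(r"examples/sec:\s*([0-9.]+)", line)
            if match:
                values.append(float(match.group(1)))
    if not values:
        return None
    per_worker = sum(values) / len(values)
    return per_worker * num_mpi_processes

print(total_examples_per_sec("training.log", num_mpi_processes=2))
```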
@zhixingheyi-tian The [documentation here](https://software.intel.com/content/www/us/en/develop/articles/intel-optimization-for-tensorflow-installation-guide.html) has a section called "sanity check" with a sample script that shows how to do this. Using info from that script with `pip install tensorflow==2.5.0` and `pip...
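For reference, a minimal version of that kind of sanity check looks something like the sketch below. The exact helper that reports oneDNN/MKL status has moved between TF releases, so treat the import path as an assumption for recent TF 2.x builds rather than as the official script:

```python
import tensorflow as tf

print("TensorFlow version:", tf.__version__)

# Assumed location of the oneDNN/MKL status helper in recent TF 2.x builds;
# older releases exposed this under a different module.
from tensorflow.python.util import _pywrap_util_port
print("oneDNN/MKL enabled:", _pywrap_util_port.IsMklEnabled())
```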
@zhixingheyi-tian There's a table in the [documentation here](https://software.intel.com/content/www/us/en/develop/articles/intel-optimization-for-tensorflow-installation-guide.html) under the section called "Differences between Intel Optimization for Tensorflow and official TensorFlow for running on Intel CPUs after v2.5" that compares...
Thanks @shailensobhee. These will get fixed in our next release.
@shailensobhee These links should be working now.
@0400H We have a TF serving client for Wide & Deep large dataset here: [run_tf_serving_client.py](https://github.com/IntelAI/models/blob/master/k8s/recommendation/tensorflow/wide_deep_large_ds/training/fp32/run_tf_serving_client.py). It's used as part of our Kubernetes pipeline that does Wide & Deep large dataset...
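For anyone adapting that client, here's a minimal sketch of a TF Serving gRPC predict call. The server address, model name, input tensor name, and input shape below are placeholders, so check run_tf_serving_client.py for the actual values used in the pipeline:

```python
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Placeholder server address; TF Serving's gRPC port defaults to 8500.
channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "wide_deep_large_ds"        # assumed model name
request.model_spec.signature_name = "serving_default"
# Assumed input tensor name and shape; replace with the model's real inputs.
request.inputs["input"].CopyFrom(
    tf.make_tensor_proto(np.zeros((1, 10), dtype=np.float32)))

result = stub.Predict(request, 30.0)  # 30 second timeout
print(result)
```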
@rkazants I don't know the answer to your question, but I'll try to find someone who can help.