build(deps): update tensorflow requirement from <2.16.0,>=2.11.0 to >=2.11.0,<2.18.0
Updates the requirements on tensorflow to permit the latest version.
Release notes
Sourced from tensorflow's releases.
TensorFlow 2.17.0
Release 2.17.0
TensorFlow
Breaking Changes
- GPU
- Support for NVIDIA GPUs with compute capability 5.x (Maxwell generation) has been removed from TF binary distributions (Python wheels).
Major Features and Improvements
Add `is_cpu_target_available`, which indicates whether or not TensorFlow was built with support for a given CPU target. This can be useful for skipping target-specific tests if a target is not supported.
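The release notes name the helper but not the namespace that exposes it, so any call site is an assumption. A defensive sketch that probes a couple of plausible locations (rather than asserting the documented API) could gate target-specific tests like so:

```python
def cpu_target_available(target: str) -> bool:
    """Best-effort probe for the new ``is_cpu_target_available`` helper.

    The 2.17.0 notes name the function but not its module, so the lookup
    below is an assumption; adjust it to wherever your TF build exposes it.
    """
    import tensorflow as tf  # deferred: keeps this file importable without TF

    for mod in (tf.config, tf.test):
        fn = getattr(mod, "is_cpu_target_available", None)
        if fn is not None:
            return bool(fn(target))
    return False  # helper absent (e.g. TF < 2.17): treat target as unsupported
```

A test suite could then use, e.g., `unittest.skipUnless(cpu_target_available("aarch64"), ...)` (the `"aarch64"` target string is a hypothetical example, not taken from the notes).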
tf.data
- Support `tf.data.experimental.distributed_save`. `distributed_save` uses the tf.data service (https://www.tensorflow.org/api_docs/python/tf/data/experimental/service) to write distributed dataset snapshots. The call is non-blocking and returns without waiting for the snapshot to finish. Setting `wait=True` on `tf.data.Dataset.load` allows the snapshots to be read while they are being written.

Bug Fixes and Other Changes
GPU
- Support for NVIDIA GPUs with compute capability 8.9 (e.g. L4 & L40) has been added to TF binary distributions (Python wheels).
Replace `DebuggerOptions` of TensorFlow Quantizer, and migrate to `DebuggerConfig` of StableHLO Quantizer.
Add TensorFlow to StableHLO converter to TensorFlow pip package.
TensorRT support: this is the last release supporting TensorRT. It will be removed in the next release.
NumPy 2.0 support: TensorFlow is going to support NumPy 2.0 in the next release. It may break some edge cases of TensorFlow API usage.
tf.lite
- Quantization for the `FullyConnected` layer is switched from per-tensor to per-channel scales for the dynamic range quantization use case (`float32` inputs/outputs and `int8` weights). The change enables the new quantization schema globally in the converter and inference engine. The new behaviour can be disabled via the experimental flag `converter._experimental_disable_per_channel_quantization_for_dense_layers = True`.
- C API:
  - The experimental `TfLiteRegistrationExternal` type has been renamed to `TfLiteOperator`, and likewise for the corresponding API functions.
- The Python TF Lite Interpreter bindings now have an option `experimental_default_delegate_latest_features` to enable all default delegate features.
- Flatbuffer version update: `GetTemporaryPointer()` bug fixed.
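The opt-out flag for dense-layer per-channel quantization is quoted verbatim in the notes; how it slots into a conversion is sketched below (assuming TF >= 2.17; `saved_model_dir` is a placeholder path, not from the notes):

```python
def convert_with_per_tensor_dense_quant(saved_model_dir: str) -> bytes:
    """Dynamic range quantization that opts back into the pre-2.17
    per-tensor scales for FullyConnected/dense layers (sketch)."""
    import tensorflow as tf  # deferred: keeps this file importable without TF

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    # Dynamic range quantization: float32 inputs/outputs, int8 weights.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    # Experimental escape hatch named in the release notes: disable the new
    # per-channel scheme for dense layers and restore per-tensor scales.
    converter._experimental_disable_per_channel_quantization_for_dense_layers = True
    return converter.convert()
```

The leading underscore marks the flag as experimental and private, so it may change or disappear in later releases.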
tf.data
- Add `wait` to `tf.data.Dataset.load`. If `True`, for snapshots written with `distributed_save`, it reads the snapshot while it is being written. For snapshots written with regular `save`, it waits until the snapshot is finished. The default is `False` for backward compatibility. Users of `distributed_save` are recommended to set it to `True`.
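Put together, the write and read sides described above might look like the following sketch (assuming TF >= 2.17; the snapshot path and dispatcher address are placeholders, and the positional argument order of `distributed_save` is an assumption based on the linked tf.data service docs):

```python
def write_snapshot(dataset, path: str, dispatcher: str) -> None:
    """Kick off a distributed snapshot via the tf.data service dispatcher
    at `dispatcher` (e.g. "grpc://localhost:5000"). Non-blocking: returns
    before the snapshot is finished."""
    import tensorflow as tf  # deferred: keeps this file importable without TF

    tf.data.experimental.distributed_save(dataset, path, dispatcher)


def read_live_snapshot(path: str):
    """Load a snapshot that distributed_save may still be writing.

    wait=True streams elements of a distributed_save snapshot while workers
    are still writing it; for a regular `save` snapshot it instead blocks
    until the snapshot is complete."""
    import tensorflow as tf

    return tf.data.Dataset.load(path, wait=True)
```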
tf.tpu.experimental.embedding.TPUEmbeddingV2
- Add `compute_sparse_core_stats` for sparse core users to profile the data with this API to get the `max_ids` and `max_unique_ids`. These numbers will be needed to configure the sparse core embedding mid-level API.
- Remove the `preprocess_features` method since it is no longer needed.

Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Abdulaziz Aloqeely, Ahmad-M-Al-Khateeb, Akhil Goel, akhilgoe, Alexander Pivovarov, Amir Samani, Andrew Goodbody, Andrey Portnoy, Ashiq Imran, Ben Olson, Chao, Chase Riley Roberts, Clemens Giuliani, dependabot[bot], Dimitris Vardoulakis, Dragan Mladjenovic, ekuznetsov139, Elfie Guo, Faijul Amin, Gauri1 Deshpande, Georg Stefan Schmid, guozhong.zhuang, Hao Wu, Haoyu (Daniel), Harsha H S, Harsha Hs, Harshit Monish, Ilia Sergachev, Jane Liu, Jaroslav Sevcik, Jinzhe Zeng, Justin Dhillon, Kaixi Hou, Kanvi Khanna, LakshmiKalaKadali, Learning-To-Play, lingzhi98, Lu Teng, Matt Bahr, Max Ren, Meekail Zain, Mmakevic-Amd, mraunak, neverlva, nhatle, Nicola Ferralis, Olli Lupton, Om Thakkar, orangekame3, ourfor, pateldeev, Pearu Peterson, pemeliya, Peng Sun, Philipp Hack, Pratik Joshi, prrathi, rahulbatra85, Raunak, redwrasse, Robert Kalmar, Robin Zhang, RoboSchmied, Ruturaj Vaidya, sachinmuradi, Shawn Wang, Sheng Yang, Surya, Thibaut Goetghebuer-Planchon, Thomas Preud'Homme, tilakrayal, Tj Xu, Trevor Morris, wenchenvincent, Yimei Sun, zahiqbal, Zhu Jianjiang, Zoranjovanovic-Ns
Changelog
Sourced from tensorflow's changelog.
Release 2.17.0 (same content as the release notes above)
Release 2.16.2
Bug Fixes and Other Changes
... (truncated)
Commits
- `ad6d8cc` Merge pull request #71345 from tensorflow-jenkins/version-numbers-2.17.0-6959
- `8ca87bf` Update version numbers to 2.17.0
- `b3dcff9` Merge pull request #70600 from tensorflow/r2.17-2d72742d40f
- `742ccbb` Add tensorflow support for 16k page sizes on arm64
- `8581151` Merge pull request #70475 from tensorflow-jenkins/version-numbers-2.17.0rc1-8204
- `d6b2aa0` Update version numbers to 2.17.0-rc1
- `bb8057c` Merge pull request #70454 from vladbelit/gcs_trailing_dot_undo
- `72f4b02` Fix issues with TF GCS operations not working in certain environments.
- `6ed0a1a` Merge pull request #70358 from tensorflow/r2.17-b24db0b2a85
- `ffca2f5` Add back `xla/stream_executor:cuda_platform` to `tf_additional_binary_deps`.
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Bump? The pipeline is failing because tf has changed the way weights are stored and loaded. The JIT compiler is failing on Nvidia L4/L40, which require tf >=2.17.0.
Unfortunately it's not only the model weights save/load format. The keras3 integration breaks almost every model.
I have already invested a lot of time in this. In the end everything was working without errors/warnings, but some model graphs were broken (without any visible reason).
A dirty workaround is already on hold: https://github.com/mindee/doctr/pull/1542
OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting @dependabot ignore this major version or @dependabot ignore this minor version. You can also ignore all major, minor, or patch releases for a dependency by adding an ignore condition with the desired update_types to your config file.
If you change your mind, just re-open this PR and I'll resolve any conflicts on it.