
BulkInferrer component in TFX 1.16.0


https://github.com/tensorflow/tfx/blob/c4230755a5453fa0625118227fb1ed1b824bd4d9/tfx/examples/penguin/penguin_pipeline_local_e2e_test.py#L256

The BulkInferrer component fails in the unit tests provided by the penguin examples.

The error is:

Could not find variable dense_6/bias. This could mean that the variable has been deleted. In TF1, it can also mean the variable is uninitialized. Debug info: container=localhost, status error message=Resource localhost/dense_6/bias/N10tensorflow3VarE does not exist. [[{{node functional_2_1/dense_6_1/Add/ReadVariableOp}}]] [while running 'RunInference[train]/RunInference/RunInferenceImpl/BulkInference/BeamML_RunInference']

Steps to reproduce:

git clone https://github.com/tensorflow/tfx.git

git checkout v1.16.0

(Set up a virtual environment for Python 3.10 and install the dependencies in test_constraints.txt.)

cd tfx/examples/penguin

python -m unittest penguin_pipeline_local_e2e_test.py

The test for which enable_bulk_inferrer=True fails.

This was run on a system with Ubuntu 24.04 and Python 3.10.

Note that I separately checked that the saved_model could be loaded, that the "serving_default" signature could be extracted, and that the extracted function successfully produced predictions for example inputs.
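
For reference, the manual check looked roughly like the sketch below. The model path, the signature input name ("examples"), and the feature names are assumptions based on the penguin example, not exact values from my run:

```python
import tensorflow as tf

saved_model_dir = "/path/to/Trainer/model/1/Format-Serving"  # placeholder path

# Load the SavedModel and pull out the serving signature.
loaded = tf.saved_model.load(saved_model_dir)
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)  # confirm the expected input name/dtype

# One serialized tf.Example; feature names assumed from the penguin raw schema.
example = tf.train.Example(features=tf.train.Features(feature={
    "culmen_length_mm": tf.train.Feature(float_list=tf.train.FloatList(value=[0.3])),
    "culmen_depth_mm": tf.train.Feature(float_list=tf.train.FloatList(value=[0.4])),
    "flipper_length_mm": tf.train.Feature(float_list=tf.train.FloatList(value=[0.5])),
    "body_mass_g": tf.train.Feature(float_list=tf.train.FloatList(value=[0.6])),
}))

# The penguin serving signature takes a batch of serialized tf.Example strings.
predictions = infer(examples=tf.constant([example.SerializeToString()]))
print(predictions)
```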

nking commented on Nov 15 '25

Hey @nking, thanks for reporting this issue. I have reproduced it and got the same value error (screenshot attached). I will work on this and update you further.

bharatjetti commented on Dec 05 '25

Awesome, thanks for the response!

In the meantime I made a workaround in my own code by copying the tfx-bsl public and private run_inference modules and adding to the latter the ability to run SavedModels.
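
This is not the actual tfx-bsl patch, just a sketch of the general idea: run inference in Beam by loading the SavedModel directly and calling its "serving_default" signature, instead of going through the code path that raises the "Could not find variable" error. The class name, paths, and the input key "examples" are placeholders/assumptions:

```python
import apache_beam as beam
import tensorflow as tf


class SavedModelInferenceFn(beam.DoFn):
    """Runs the serving_default signature of a SavedModel on serialized tf.Examples."""

    def __init__(self, saved_model_dir: str):
        self._saved_model_dir = saved_model_dir
        self._infer = None

    def setup(self):
        # Load once per worker so the variables stay alive for the DoFn's lifetime.
        loaded = tf.saved_model.load(self._saved_model_dir)
        self._infer = loaded.signatures["serving_default"]

    def process(self, serialized_example: bytes):
        # Input name "examples" is an assumption based on the penguin serving fn.
        outputs = self._infer(examples=tf.constant([serialized_example]))
        yield {k: v.numpy() for k, v in outputs.items()}


# Usage sketch:
# with beam.Pipeline() as p:
#     _ = (p
#          | beam.Create(serialized_examples)  # list of tf.Example bytes
#          | beam.ParDo(SavedModelInferenceFn("/path/to/Format-Serving")))
```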

nking commented on Dec 08 '25