
Improve recommender tutorial by updating code to be usable outside of InteractiveContext

Open TaylorZowtuk opened this issue 2 years ago • 7 comments

URL(s) with the issue:

  • https://www.tensorflow.org/tfx/tutorials/tfx/recommenders

Description of issue (what needs changing):

The current notebook runs without issue and works as a starting point. For me (and, I presume, others) the natural next step is to organize the code into a more production-like pipeline, which means adapting the notebook to fit something like the templates described in this guide.

However, if one adapts the recommender tutorial to run outside of InteractiveContext, the code fails. In particular, the Channels that we pass to the Trainer component are empty when the pipeline is run with LocalDagRunner. When the MovielensModel calls movies_uri.get()[0] in its constructor, the program throws a RuntimeError because it is indexing into an empty list. This happens even though the artifacts do exist in the local file system and the previous components ran correctly.
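To make the failure mode concrete, here is a minimal toy sketch (hypothetical FakeChannel/FakeArtifact classes, not the real TFX API) of what happens when a channel's artifact list is never resolved:

```python
# Toy stand-ins for TFX's Channel/Artifact, purely for illustration.
class FakeArtifact:
    def __init__(self, uri):
        self.uri = uri

class FakeChannel:
    """Mimics a Channel: get() returns the resolved artifact list."""
    def __init__(self, artifacts=None):
        self._artifacts = artifacts or []

    def get(self):
        return self._artifacts

# Under InteractiveContext, the channel already holds in-memory artifacts:
resolved = FakeChannel([FakeArtifact("/tmp/pipeline/movies/1")])
print(resolved.get()[0].uri)  # prints "/tmp/pipeline/movies/1"

# A channel smuggled through custom_config is never populated by
# LocalDagRunner, so get() returns [] and indexing it raises:
unresolved = FakeChannel()
try:
    unresolved.get()[0]
except IndexError as e:
    print("fails, like the error in the tutorial:", e)
```

The artifacts sit on disk either way; the difference is only whether anything ever populated the channel object handed to the model code.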

I created a fork and (arbitrarily) pushed my code here to illustrate exactly what I am running.

You can see the logs from a run here. In particular, look from this line onwards and you will see what custom_config evaluates to and the error.

From my brief attempt at tracing through the TFX code for the Trainer component, it seems that the executors resolve artifacts for the regular arguments (like examples, transform_graph, and schema) differently from anything placed in custom_config. That is why train_files in run_fn() is a valid path while the custom_config values are empty Channels. But I am uncertain why this differs depending on the orchestrator used, and what the correct way to resolve it is.
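The asymmetry can be sketched in toy form (illustrative code only, not TFX internals): an orchestrator resolves a component's declared input channels against its metadata store, while custom_config is serialized and passed through as an opaque blob, so a Channel placed inside it is never resolved.

```python
# Pretend metadata store mapping declared inputs to artifact URIs.
metadata_store = {
    "examples": ["/pipeline/examples/3"],
    "movies": ["/pipeline/movies/3"],
}

def run_component(declared_inputs, custom_config):
    # Declared inputs are looked up and replaced with real artifact URIs.
    resolved = {name: metadata_store[name] for name in declared_inputs}
    # custom_config is passed through untouched -- no resolution happens.
    return resolved, custom_config

resolved, cfg = run_component(["examples"], {"movies": []})
print(resolved)  # {'examples': ['/pipeline/examples/3']}
print(cfg)       # {'movies': []} -- still empty, like the tutorial's channel
```

Under InteractiveContext the objects in custom_config happen to be the same in-memory channels the earlier components populated, which is why the tutorial appears to work there.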

This is not a new point of confusion; others have come across the same situation. Unfortunately, that question was never answered, and I was also unable to find an answer in the TensorFlow repos/docs or other Stack Overflow posts. I hope this issue can clarify the correct way to approach the situation and help others avoid the same mistake in the future.

Why this should be changed:

I would like to request that the tutorial be updated because:

  • the tutorial should teach users to use TFX in a way that works regardless of the orchestrator
  • it will help users avoid facing unexpected failures when they apply what they learned from the tutorial
  • there is currently a lack of clarity on why these artifacts are empty in some situations

TaylorZowtuk avatar May 03 '23 21:05 TaylorZowtuk

@rcrowe-google I see you published the original tutorial. Would you be willing to share your thoughts on this, whether it's a worthwhile improvement, and possibly clarify why the code works when using InteractiveContext but not LocalDagRunner?

TaylorZowtuk avatar May 03 '23 21:05 TaylorZowtuk

Hi @TaylorZowtuk - Yes, it would be worthwhile to update the example to work in LocalDagRunner, and I should have written it that way in the first place. It's been on my list to update it for what seems like forever, and I just haven't had time yet. The code works in InteractiveContext because the artifacts are in memory, but they really should have been passed in Channels.

rcrowe-google avatar May 03 '23 23:05 rcrowe-google

Thanks for confirming and thanks for the clarification. I appreciate you taking the time to respond @rcrowe-google.

TaylorZowtuk avatar May 04 '23 14:05 TaylorZowtuk

Hello, I'm experiencing the same issue. Is there a workaround for this currently?

BlakeB415 avatar Aug 26 '23 04:08 BlakeB415

Is there any progress on this issue? I'm running into the same problem.

lukhaza avatar Jan 02 '24 06:01 lukhaza

@lukhaza I believe you extended the Trainer component to ingest multiple Examples channels. Can you share any of those details here?

I've extended the standard Trainer and Tuner components (as well as the google_cloud_ai_platform extensions of those components) to support additional Examples and Schema inputs. Here is a gist.

The solution is general (supporting both "item" and "query" inputs), but in the context of this thread, you would pass the dataset of unique movies into run_fn via the Trainer's item_examples and item_schema parameters, and then use fn_args.item_files, fn_args.item_schema_path and fn_args.item_data_accessor to load this dataset (just as you already use fn_args.train_files, fn_args.schema_path and fn_args.data_accessor to load your training dataset).
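As a rough illustration of how run_fn might consume those extra inputs: the names item_files and item_schema_path come from the gist above (they are not standard TFX FnArgs fields), and the SimpleNamespace below merely stands in for the FnArgs object the extended Trainer executor would construct.

```python
from types import SimpleNamespace

def run_fn(fn_args):
    """Sketch of a run_fn reading both the usual training files and the
    extra item-examples channel added by the extended Trainer."""
    # Training data arrives the standard way:
    train_paths = list(fn_args.train_files)
    # The unique-movies dataset arrives via the added item_examples input:
    item_paths = list(fn_args.item_files)
    return {"train": train_paths, "items": item_paths}

# Stubbed invocation with hypothetical artifact paths:
fn_args = SimpleNamespace(
    train_files=["/pipeline/Transform/transformed_examples/train*"],
    item_files=["/pipeline/ItemExampleGen/examples/train*"],
)
print(run_fn(fn_args))
```

In a real pipeline you would use fn_args.item_data_accessor (analogous to fn_args.data_accessor) rather than reading the file patterns directly.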