graph-learn
Any plan to decouple TF-PS and distributed graph engine?
In the current implementation, the graph engine's client and server are co-placed with the TF worker and TF parameter server, respectively.
When I want to use one TF worker for training and multiple workers for sampling data at the same time (e.g., for GPU training), the current architecture imposes some restrictions; a sketch of today's coupling follows below. So, is there any plan to decouple the TF PS and the distributed graph engine to make the architecture more flexible?
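For concreteness, here is a minimal sketch of the co-placement described above, assuming graph-learn's documented distributed entry points (`gl.Graph().init(cluster=..., job_name=..., task_index=...)`, `g.wait_for_close()`, `g.close()`); the hosts, ports, and flag handling are hypothetical placeholders, not the project's actual launch script.

```python
# Illustrative sketch only: assumes graph-learn's distributed init API;
# host addresses, ports, and flags below are hypothetical.
import graphlearn as gl
import tensorflow as tf

tf.app.flags.DEFINE_string("job_name", "worker", "worker or ps")
tf.app.flags.DEFINE_integer("task_index", 0, "task index within the job")
FLAGS = tf.app.flags.FLAGS

# TF cluster: the graph-learn roles are pinned to these same processes.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0:2222"],                              # hypothetical hosts
    "worker": ["worker0:2223", "worker1:2224"],
})

g = gl.Graph()  # .node(...).edge(...) schema omitted for brevity

if FLAGS.job_name == "ps":
    # The graph-learn *server* is embedded in the TF parameter-server
    # process, so their lifecycles are tied together.
    g.init(cluster={"server": "ps0:8888", "client": 2},
           job_name="server", task_index=FLAGS.task_index)
    server = tf.train.Server(cluster, job_name="ps",
                             task_index=FLAGS.task_index)
    g.wait_for_close()   # graph server lives and dies with the TF PS
    server.join()
else:
    # The graph-learn *client* is embedded in the TF worker process, so
    # the number of samplers is forced to equal the number of trainers;
    # one GPU trainer cannot be fed by several sampling-only workers.
    g.init(cluster={"server": "ps0:8888", "client": 2},
           job_name="client", task_index=FLAGS.task_index)
    # ... build sampling queries and the TF training loop here ...
    g.close()
```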
Good suggestion, and a solution is on the way.