
Feature issues in the whole-graph classification task

CQU-Daniel opened this issue 3 years ago • 1 comment

In the graph classification task, I set two features with different names on graph g (such as g.ndata['x'] and g.ndata['y']). If they pass through the GraphConv and mean_nodes operations independently, how do I perform my graph classification at the final nn.Linear layer?

CQU-Daniel avatar Oct 06 '22 13:10 CQU-Daniel

Hi @CQU-Daniel, my understanding is that you train two embeddings for each node and want to use both of them for the final readout. The simplest way, I think, is to concatenate them as the input feature; of course, you can use a more sophisticated method to fuse the two embeddings.
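A minimal sketch of the concatenation idea, using plain PyTorch tensors in place of the actual `g.ndata['x']` / `g.ndata['y']` fields (the sizes here are made up for illustration):

```python
import torch

num_nodes, dx, dy = 5, 8, 4
x = torch.randn(num_nodes, dx)  # stands in for g.ndata['x']
y = torch.randn(num_nodes, dy)  # stands in for g.ndata['y']

# Concatenate along the feature dimension, then feed the fused feature
# through the GraphConv layers and the mean_nodes readout as usual.
fused = torch.cat([x, y], dim=1)
print(fused.shape)  # torch.Size([5, 12])
```

After this, a single feature tensor of width `dx + dy` flows through the rest of the model, so the downstream layers need no changes.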

peizhou001 avatar Oct 10 '22 05:10 peizhou001

> Hi @CQU-Daniel, my understanding is that you train two embeddings for each node and want to use both of them for the final readout. The simplest way, I think, is to concatenate them as the input feature; of course, you can use a more sophisticated method to fuse the two embeddings.

Yes, I thought the same as you at first, but this poses a problem: each graph convolution layer has a different dimension, so I can't track how each node feature changes at each layer and separate them again at the final linear classification layer. My idea now is to split the x and y features into two graphs for training (with the adjacency matrices guaranteed to be the same). Do you have a better way?
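One alternative to duplicating the graph is to run two independent layer stacks over the same adjacency and keep the two feature streams separate until the classifier. This is only a sketch: a normalized dense-adjacency matmul stands in for `dgl.nn.GraphConv`, and all sizes and weights are invented for illustration.

```python
import torch

def mean_conv(adj, h, weight):
    # Mean over neighbors, then a linear transform (GraphConv-like).
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return torch.relu((adj @ h) / deg @ weight)

num_nodes = 4
adj = torch.ones(num_nodes, num_nodes)  # toy dense adjacency, shared by both streams
x = torch.randn(num_nodes, 8)           # stands in for g.ndata['x']
y = torch.randn(num_nodes, 6)           # stands in for g.ndata['y']
wx = torch.randn(8, 16)                 # weights of the x-stream layer
wy = torch.randn(6, 16)                 # weights of the y-stream layer

hx = mean_conv(adj, x, wx)  # x-stream: (num_nodes, 16)
hy = mean_conv(adj, y, wy)  # y-stream: (num_nodes, 16)
print(hx.shape, hy.shape)
```

Because each stream has its own layers, the two features never mix mid-network, and there is no need to track which columns belong to x or y inside a shared tensor.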

CQU-Daniel avatar Oct 17 '22 07:10 CQU-Daniel

This issue has been automatically marked as stale due to lack of activity. It will be closed if no further activity occurs. Thank you

github-actions[bot] avatar Nov 17 '22 01:11 github-actions[bot]


Hi @CQU-Daniel, really sorry for missing your question! Since you want to keep the two features completely separate through all operations, you can train them independently and then combine them to get the final embedding for the loss calculation and backpropagation.

If you have any further questions, please let me know!
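A sketch of this suggestion at the readout end: each branch's final node embeddings are mean-pooled into a graph-level embedding, then concatenated and classified. The tensors and sizes below are placeholders, and `mean(dim=0)` stands in for `dgl.mean_nodes` on a single graph.

```python
import torch

num_nodes, num_classes = 4, 3
hx = torch.randn(num_nodes, 16)  # final node embeddings from the x-branch
hy = torch.randn(num_nodes, 16)  # final node embeddings from the y-branch

gx = hx.mean(dim=0)  # graph-level readout of the x-branch
gy = hy.mean(dim=0)  # graph-level readout of the y-branch

# Combine the two graph embeddings only at the classifier, so the loss
# and backpropagation see both branches.
classifier = torch.nn.Linear(32, num_classes)
logits = classifier(torch.cat([gx, gy], dim=0))
print(logits.shape)  # torch.Size([3])
```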

peizhou001 avatar Apr 03 '23 05:04 peizhou001