
What is the maximum number of nodes Graphormer can handle?

Open skye95git opened this issue 3 years ago • 8 comments

the quadratic complexity of the self-attention module restricts Graphormer’s application on large graphs.

The paper describes Graphormer's application to large graphs as restricted. What is the maximum number of nodes Graphormer can handle?

skye95git avatar Apr 22 '22 07:04 skye95git

It depends on your GPU memory and batch size. For example, max_node=128 with a batch size of 32 works fine under 32 GB of GPU memory.

zhengsx avatar Apr 22 '22 09:04 zhengsx
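
As a rough illustration of why the node budget is memory-bound: dense self-attention materializes an attention-score tensor that grows quadratically with the number of nodes. The sketch below uses hypothetical head/layer counts and counts only the score tensors (not activations, gradients, or optimizer state), so it is an order-of-magnitude illustration rather than the repo's actual memory model.

# Back-of-the-envelope estimate of dense attention-score memory only;
# real training also stores activations, gradients, and optimizer state.
def attn_score_bytes(num_nodes, batch_size=32, num_heads=32, num_layers=12,
                     bytes_per_float=4):
    # One [batch, heads, n, n] score tensor per layer.
    return batch_size * num_heads * num_layers * num_nodes ** 2 * bytes_per_float

for n in (128, 512, 2048):
    print(f"n={n}: ~{attn_score_bytes(n) / 1024**3:.2f} GiB in attention scores alone")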

It depends on your GPU memory and batch size. For example, max_node=128 with a batch size of 32 works fine under 32 GB of GPU memory.

Thanks for your reply! So is it the hardware configuration that limits the number of nodes in a graph, rather than the complexity of the model? However, in other kinds of data, such as ASTs, the number of nodes often exceeds several hundred. Does Graphormer apply in this case?

skye95git avatar Apr 24 '22 08:04 skye95git

It depends on your GPU memory and batch size. For example, max_node=128 with a batch size of 32 works fine under 32 GB of GPU memory.

Thanks for your reply! So is it the hardware configuration that limits the number of nodes in a graph, rather than the complexity of the model? However, in other kinds of data, such as ASTs, the number of nodes often exceeds several hundred. Does Graphormer apply in this case?

GPU memory usage is a combined result of the model configuration, the training configuration, and the model complexity; and of course, the hardware configuration determines how much GPU memory is available for training.

If you hit an out-of-memory (OOM) issue during training, you can decrease your minibatch size, use gradient accumulation, cut your large graph into smaller subgraphs, use a smaller model, or simply use a more advanced GPU with more memory.

zhengsx avatar Apr 24 '22 15:04 zhengsx
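
Of the suggestions above, gradient accumulation is the least invasive. A minimal sketch in plain PyTorch, using toy stand-ins rather than Graphormer's actual training loop:

import torch
from torch import nn

# Toy stand-ins; in practice these would be the Graphormer model and a graph DataLoader.
model = nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loader = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(8)]

accum_steps = 4  # effective batch size = 8 * 4 = 32, at the memory cost of batch size 8

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = nn.functional.mse_loss(model(x), y)
    (loss / accum_steps).backward()      # scale so accumulated gradients average correctly
    if (step + 1) % accum_steps == 0:
        optimizer.step()                 # one optimizer step per accum_steps minibatches
        optimizer.zero_grad()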

When building a graph with PyG, the molecular dataset already provides the data required by torch_geometric.data.Data. For example, a sample from ZINC:

{'num_atom': 24, 
'atom_type': tensor([0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 3, 0, 0],
       dtype=torch.int8), 
'bond_type': tensor([
        [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
        [0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 1, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 2, 0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 2, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 2, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 2, 0, 1, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2],
        [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0]],
       dtype=torch.int8), 
'logP_SA_cycle_normalized': tensor([3.1382])}
x = mol['atom_type'].to(torch.long).view(-1, 1)

The node features directly use the atom_type attribute from the sample. I want to use Graphormer to encode graphs from a custom dataset, but my graph data doesn't have node features the way the ZINC dataset does. In this case, do I need to train the model to learn node features first and then use Graphormer to obtain the graph embedding?
If yes, can this be understood as similar to learning node embeddings with a GCN and then computing the graph embedding from them?
If no, how should the nodes used to construct the graph be represented?

skye95git avatar Apr 26 '22 03:04 skye95git
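
For reference, a ZINC-style sample like the one quoted above can be turned into a torch_geometric.data.Data object roughly as follows. This is a sketch: it assumes the dict fields printed above and uses the bond-type value as the edge attribute.

import torch
from torch_geometric.data import Data
from torch_geometric.utils import dense_to_sparse

def zinc_dict_to_data(mol):
    # Node features: the atom type index, one column per node.
    x = mol['atom_type'].to(torch.long).view(-1, 1)
    # bond_type is a dense [num_atom, num_atom] matrix; nonzero entries are bonds,
    # and the entry value (1, 2, ...) is the bond type, used here as the edge attribute.
    edge_index, edge_attr = dense_to_sparse(mol['bond_type'].to(torch.long))
    y = mol['logP_SA_cycle_normalized']
    return Data(x=x, edge_index=edge_index, edge_attr=edge_attr.view(-1, 1), y=y)

# data = zinc_dict_to_data(mol)  # where mol is the dict shown above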

At the end of the paper:

Node Representation. There is a wide range of node representation tasks on graph structured data, such as finance, social network, and temporal prediction. Graphormer could be naturally used for node representation extraction with an applicable graph sampling strategy. We leave it for future work.

Getting node representations with Graphormer is left as future work. So for now, Graphormer can only produce graph representations, unlike a GCN, which first gets node representations and then derives the graph representation from them?
Does the training data itself need to contain node features, as the ZINC dataset does?

skye95git avatar Apr 26 '22 10:04 skye95git

The node features directly use the atom_type attribute from the sample. I want to use Graphormer to encode graphs from a custom dataset, but my graph data doesn't have node features the way the ZINC dataset does. In this case, do I need to train the model to learn node features first and then use Graphormer to obtain the graph embedding? If yes, can this be understood as similar to learning node embeddings with a GCN and then computing the graph embedding from them? If no, how should the nodes used to construct the graph be represented?

Hi, sorry for the late response. (Other team members have just been too busy to answer!) For this question, the answer is yes. If your custom dataset doesn't contain node features, it's still OK to use Graphormer by initializing the node features to zeros, initializing them randomly, etc. In Graphormer we use centrality encoding (node degree information) to compute node embeddings, so we can still obtain a different embedding for each node while taking the graph's structural information into account. The graph's embedding can then be obtained by taking the representation of the virtual node. I think this process has some similarity with a GCN.

mavisguan avatar Jul 07 '22 02:07 mavisguan
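
A small sketch of what the zero/random initialization mentioned above can look like for a featureless graph in PyG. The graph and feature sizes here are made up; the point is that the node degree used by centrality encoding is available from the edge structure alone.

import torch
from torch_geometric.data import Data
from torch_geometric.utils import degree

# A featureless 4-node graph given only by its edges (undirected, both directions listed).
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]], dtype=torch.long)
num_nodes = 4

# Option 1: constant/zero placeholder features (all nodes start identical).
x_zero = torch.zeros(num_nodes, 1, dtype=torch.long)

# Option 2: random placeholder features.
x_rand = torch.randn(num_nodes, 16)

# Node degree, computable from the structure alone, is what centrality encoding relies on.
deg = degree(edge_index[0], num_nodes=num_nodes, dtype=torch.long)
print(deg)  # tensor([1, 2, 2, 1])

data = Data(x=x_zero, edge_index=edge_index, num_nodes=num_nodes)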

At the end of the paper:

Node Representation. There is a wide range of node representation tasks on graph structured data, such as finance, social network, and temporal prediction. Graphormer could be naturally used for node representation extraction with an applicable graph sampling strategy. We leave it for future work.

Getting node representations with Graphormer is left as future work. So for now, Graphormer can only produce graph representations, unlike a GCN, which first gets node representations and then derives the graph representation from them? Does the training data itself need to contain node features, as the ZINC dataset does?

Graphormer can get node representations (in fact, the model's output is itself the set of node representations), and the training data doesn't need to contain node features.

mavisguan avatar Jul 07 '22 02:07 mavisguan
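
To make the node-versus-graph distinction concrete, here is a hedged sketch that assumes a hypothetical encoder output of shape [num_nodes + 1, hidden_dim], with the virtual-node token prepended at index 0; the actual tensor layout in the Graphormer codebase may differ.

import torch

hidden_dim = 768
num_nodes = 24

# Hypothetical encoder output: one row per node plus a prepended virtual-node token.
encoder_out = torch.randn(num_nodes + 1, hidden_dim)

graph_repr = encoder_out[0]       # virtual-node row: the graph-level representation
node_repr = encoder_out[1:]       # remaining rows: per-node representations

print(graph_repr.shape, node_repr.shape)  # torch.Size([768]) torch.Size([24, 768])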

Hi, when I have obtained all the node representations and want to obtain the graph representation, should I simply use an aggregation strategy?

BrainLyh avatar May 23 '23 01:05 BrainLyh
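
For what it's worth, the usual aggregation is a simple readout (mean, sum, or max pooling) over the node representations, or taking the virtual-node representation as discussed above. A minimal sketch using PyG's pooling helpers; the shapes are illustrative only.

import torch
from torch_geometric.nn import global_mean_pool, global_add_pool

# Suppose node_repr holds node representations for two graphs batched together:
# graph 0 has 3 nodes, graph 1 has 2 nodes.
node_repr = torch.randn(5, 768)
batch = torch.tensor([0, 0, 0, 1, 1])   # graph assignment for each node

graph_mean = global_mean_pool(node_repr, batch)  # shape [2, 768]
graph_sum = global_add_pool(node_repr, batch)    # shape [2, 768]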