Fixed graph, and Graph Convolutional Model
Hi, I'm enjoying reading your code and paper. I have some questions about the fixed graph and a few parts of the Graph Convolutional Model.
-
What does 'hop_step' mean in 'graph.py -> def get_adjacency(self, A)'? I can't understand the 'compute hop steps' and 'compute adjacency' parts of 'def get_adjacency(self, A)' (where graph A is created).
-
I can't match the image shown below (equations 5 and 6 in your paper) with 'def normalize_adjacency(self, A)' in graph.py. In particular, I can't find the Lambda you describe in the paper anywhere in the code, and I don't understand what Lambda means, so I can't figure out how the normalized A is calculated.

-
Is equation 7 in your paper (image shown below) implemented in 'graph_conv_block'? I can't locate it.

-
In 'xin_feeder_baidu -> getitem', you explained that the shape of 'self.all_adjacency' is (5010, 120, 120). Does 5010 mean the batch size or the number of frames? Does 120 mean the number of nodes (agents)?
I'm sorry for asking so many questions about building the fixed graph and about the Graph Convolutional Model pipeline. I still can't understand them even though I have read your paper a few times, and I'm very interested in studying how the graph in your paper is created.
Again, thanks for your code and paper.
Hi jmin0530, first of all, thank you for your interest in our work and for your valuable questions.
-
the "hop_step" is used in GRIP/layers/graph.py [line 21], np.linalg.matrix_power(A, d). In Graph Theory, adjacency matrix powers (A^hop_step) gives the number of walks of length hop_step from node i to node j. Here, we use it to figure out whether two nodes are connected within "hop_step" steps.
-
About building and normalizing the adjacency matrix: jmercat provided an alternative solution at https://github.com/xincoder/GRIP/issues/2#issue-649859751. His implementation is easier to understand.
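For illustration, here is a minimal sketch of one common form of that normalization, assuming the Lambda in equations 5-6 is the diagonal degree matrix of A (that reading is an assumption; see jmercat's implementation linked above for a verified alternative):

```python
import numpy as np

# Assumed form:  Lambda_ii = sum_j A_ij + alpha,   A_normalized = Lambda^{-1} A
# alpha is a small constant that protects empty rows from division by zero.
def normalize_adjacency(A, alpha=0.001):
    degree = A.sum(axis=1) + alpha       # diagonal entries of Lambda
    return np.diag(1.0 / degree) @ A     # Lambda^{-1} A

A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
print(normalize_adjacency(A))            # each row now sums to (almost) 1
```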
-
In model.py, G_{fixed} is the "pra_A" [line 67, model.py] calculated outside of the model, and G_{train} is "self.edge_importance". G_{fixed} + G_{train} is implemented at [line 75, model.py], and f_{graph} is calculated inside the "gcn()" function on the same line. The multiplication of the graph and the feature f_{conv} is calculated in graph_operation_layer.py [line 32].
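A minimal sketch of that combination (variable names and sizes are illustrative, not the exact GRIP code):

```python
import torch
import torch.nn as nn

N, C, T, V = 8, 64, 12, 120                       # batch, channels, frames, objects
x = torch.randn(N, C, T, V)                       # f_conv features
pra_A = torch.rand(V, V)                          # G_fixed, built outside the model
edge_importance = nn.Parameter(torch.ones(V, V))  # G_train, learned

A = pra_A + edge_importance                       # G_fixed + G_train
out = torch.einsum('nctv,vw->nctw', x, A)         # graph-feature multiplication
print(out.shape)                                  # torch.Size([8, 64, 12, 120])
```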
-
5010 is the total number of training/testing samples; please refer to [line 93-101, data_process.py] for more details. 120 is the maximum number of observed objects (initialized in data_process.py line 15).
I'm sorry for my late reply. Your comments helped me understand what I didn't know. Thank you.
@xincoder Hi, I have a few more questions.
-
What does 'kc' mean in graph_operation_layer.py -> 'n, kc, t, v = x.size()'? Why is x reshaped with "x.view(n, self.kernel_size, kc // self.kernel_size, t, v)"? And what is 'self.kernel_size'?
-
What does 'einsum(x, A)' mean in graph_operation_layer.py? Before the einsum, x is reshaped to (n, self.kernel_size, kc // self.kernel_size, t, v), but in the einsum, isn't x still of size (n, kc, t, v)? Why? I'm very confused about the forward function in graph_operation_layer.py.
-
What does 'in_channel' mean in 'main.py' -> 'if __name__ == '__main__':'? Is it the 'c' in your paper, which is set to c = 2? In your code, 'in_channel' is set to 4. I can't reconcile 'c' in your paper with 'in_channels' in your code. Thanks.
@jmin0530 Thank you for your interest and your questions. These questions can be figured out by carefully reading through our code.
- "kc" is the name of a dimension. It is just a variable.
x is reshaped to the specific shape so that we can multiply it with the adjacency matrix (A).
self.kernel_size is assigned with a value while initializing the Graph_Conv_Block (model.py line27-29) -> ConvTemporalGraphical (graph_conv_block.py line19) -> (graph_operation_layer.py line16).
-
The 'einsum' is the standard einsum operation; you can see that we directly call "torch.einsum" without any self-implemented wrapper. Please refer to the PyTorch documentation for the official description.
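To make the shapes concrete: after the view(), x is five-dimensional, and the einsum subscripts reflect that. Here is a minimal runnable sketch of the reshape + einsum in an ST-GCN-style graph operation layer, which graph_operation_layer.py follows (the dimension sizes are illustrative):

```python
import torch

# The preceding conv packs K = self.kernel_size channel groups into one
# dimension (kc = K * C); view() unpacks them, and the einsum applies
# each of the K adjacency matrices to its own channel group, summing
# over k and over the node dimension v.
N, K, C, T, V = 8, 3, 16, 12, 120
x = torch.randn(N, K * C, T, V)        # conv output, so kc = K * C here
A = torch.randn(K, V, V)               # K stacked adjacency matrices

n, kc, t, v = x.size()
x = x.view(n, K, kc // K, t, v)        # (n, kc, t, v) -> (n, k, c, t, v)
out = torch.einsum('nkctv,kvw->nctw', x, A)
print(out.shape)                       # torch.Size([8, 16, 12, 120])
```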
-
In main.py [line 94], we select 4 feature dimensions, so the input channel is 4. The released code is only for the Baidu ApolloScape competition; if you want to apply it to other datasets, you need to modify the data loader (or the data preprocessing) accordingly before feeding the data into the model.
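As a concrete illustration of that shape requirement (the sizes below are assumptions for illustration, not the dataset's actual values; which four features are selected is defined by the ApolloScape preprocessing):

```python
import torch

# The model expects input of shape (batch, in_channels, frames, max_objects),
# so a custom data loader must emit a matching channel dimension.
in_channels, T, V = 4, 12, 120
batch = torch.randn(16, in_channels, T, V)   # 4 features per object per frame
assert batch.shape[1] == in_channels         # must match the model's in_channels
```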