OpenGraph
Node Classification Results Unchanged Across Different Pre-trained Models
Hey, node classification test results are exactly identical regardless of which pre-trained model I load, even when:

- training on different datasets (gen0, gen1, gen2, custom graphs)
- training for different numbers of epochs (5 vs. 50)
- modifying the model architecture

Link prediction results show these models ARE different.
Is this expected behavior for zero-shot evaluation, or is there an issue with how the node classification code loads/uses the pre-trained models? Looking at node_classification/main.py, I see the code only calls test_epoch() without any training, and I'm wondering whether there's a caching issue or whether the pre-trained parameters aren't actually being used. Any guidance would be appreciated!
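For reference, here is the sanity check I'm planning to run to confirm the checkpoints themselves differ. This is only a sketch: the file paths and the "model" wrapper key are guesses on my part, not OpenGraph's actual save format.

```python
# Sanity check: do two pre-trained checkpoints actually contain different weights?
# Paths and the "model" wrapper key are hypothetical; adapt to the real saved format.
import torch

def to_state_dict(ckpt):
    """Accept a bare state dict, a wrapper dict, or a pickled nn.Module."""
    if isinstance(ckpt, torch.nn.Module):
        return ckpt.state_dict()
    if isinstance(ckpt, dict) and "model" in ckpt:  # hypothetical wrapper key
        inner = ckpt["model"]
        return inner.state_dict() if isinstance(inner, torch.nn.Module) else inner
    return ckpt

state_a = to_state_dict(torch.load("pretrained_gen0.pt", map_location="cpu"))
state_b = to_state_dict(torch.load("pretrained_gen1.pt", map_location="cpu"))

identical = True
for name, tensor_a in state_a.items():
    tensor_b = state_b.get(name)
    if tensor_b is None or tensor_a.shape != tensor_b.shape:
        print(f"{name}: missing or shape mismatch in second checkpoint")
        identical = False
    elif not torch.equal(tensor_a, tensor_b):
        diff = (tensor_a.float() - tensor_b.float()).abs().max().item()
        print(f"{name}: differs, max abs diff {diff:.3e}")
        identical = False

print("Checkpoints are identical" if identical else "Checkpoints differ")
```

If this reports differing weights but test_epoch() still produces bit-identical node classification metrics, the problem is presumably in how the node classification script loads or applies the parameters rather than in the checkpoints themselves.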