Category: B1 (Bonus); Team name: DLLB; Dataset: Reddit
Co-authored-by: luka-benic [email protected]
Co-authored-by: dleko11 [email protected]
Checklist
- [x] My pull request has a clear and explanatory title.
- [x] My pull request passes the Linting test.
- [x] I added appropriate unit tests and I made sure the code passes all unit tests. (refer to comment below)
- [x] My PR follows PEP8 guidelines. (refer to comment below)
- [x] My code is properly documented, using numpy docs conventions, and I made sure the documentation renders properly.
- [x] I linked to issues and PRs that are relevant to this PR.
Description
This PR introduces a complete on-disk data loading pipeline for transductive datasets, together with higher-order structure comparison utilities and integration with the memory profiling tools.
The goal is to enable scalable training on large graphs while preserving global topological information when using topological liftings.
Our solution is based on the Cluster-GCN algorithm proposed in Chiang et al., *Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks* (KDD 2019). The same approach underlies PyTorch Geometric's ClusterData/ClusterLoader implementation.
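For context, this is roughly how a Cluster-GCN loader is constructed with PyTorch Geometric; the dataset choice and parameter values below are illustrative, not the PR's defaults:

```python
from torch_geometric.datasets import Planetoid
from torch_geometric.loader import ClusterData, ClusterLoader

# METIS-partition the graph once, then sample several partitions per batch.
data = Planetoid(root="data/Planetoid", name="Cora")[0]
cluster_data = ClusterData(data, num_parts=32)      # METIS partitioning
loader = ClusterLoader(cluster_data, batch_size=4,  # 4 clusters per batch,
                       shuffle=True)                # collated into one subgraph
```

With `shuffle=True`, each epoch groups different clusters together, which is what lets repeated epochs recover more cross-cluster structure.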
Key Contributions
1. Local Liftings Lose Global Structure
If liftings are applied inside each cluster independently, the following issues arise:
- each cluster produces only local higher-order structures,
- any structure that spans multiple clusters is lost,
- when clusters are later collated during batching, their lifted structures remain disjoint,
- the dataloader has no mechanism to merge these local liftings into a global, consistent structure.
In short: lifting before batching destroys global higher-order structure.
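As a minimal, self-contained illustration (not code from this PR): a triangle whose vertices are split across two clusters is invisible to any per-cluster clique lifting, but visible to a global one.

```python
# Hypothetical example: one global triangle split across two clusters.
edges = {(0, 1), (1, 2), (0, 2)}   # a single triangle
clusters = [{0, 1}, {2}]           # METIS-style partition

def induced_triangles(nodes, edges):
    """Triangles whose vertices all lie inside `nodes`."""
    e = {tuple(sorted(x)) for x in edges if set(x) <= nodes}
    return {
        (a, b, c)
        for (a, b) in e for (b2, c) in e
        if b == b2 and (a, c) in e
    }

per_cluster = set().union(*(induced_triangles(c, edges) for c in clusters))
assert per_cluster == set()                                 # local: nothing found
assert induced_triangles({0, 1, 2}, edges) == {(0, 1, 2)}   # global: triangle found
```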
2. Our Solution: Batch First, Lift Second
We redesigned the transductive pipeline to preserve global topology:
- The Cluster-GCN loader samples several clusters per batch.
- These clusters are collated into a single induced subgraph.
- Liftings are applied to the collated batch, not to individual clusters.
This enables:
- discovery of structures spanning multiple clusters if they co-occur in a batch,
- randomized batching across epochs to progressively recover more global structure,
- compatibility with both standard GNNs and higher-order models (hypergraphs, cell complexes, simplicial complexes).
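Schematically, the redesigned loop looks like the sketch below, reusing the `loader` from the earlier snippet; `apply_lifting` is a hypothetical stand-in for whichever lifting transform is configured, not the PR's actual API:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

def apply_lifting(batch):
    """Hypothetical stand-in for the configured topological lifting.

    It operates on the collated batch (several clusters merged into one
    induced subgraph), so structures spanning clusters that co-occur in
    the batch are still discovered.
    """
    return batch  # identity here; a real lifting adds higher-order cells

model = GCNConv(in_channels=1433, out_channels=7)  # Cora-sized, illustrative
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for batch in loader:                 # several clusters, already collated
    lifted = apply_lifting(batch)    # lift AFTER collation, not per cluster
    out = model(lifted.x, lifted.edge_index)
    loss = F.cross_entropy(out[lifted.train_mask], lifted.y[lifted.train_mask])
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```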
We sweep two key parameters:
- `num_parts`: number of METIS partitions,
- `batch_size`: number of partitions grouped before lifting.
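A sweep over these two parameters might be driven as follows (the grid values are placeholders, not the PR's actual grid, and `data` is assumed to be already loaded):

```python
import itertools
from torch_geometric.loader import ClusterData, ClusterLoader

# Illustrative grid over partition count and clusters-per-batch.
for num_parts, batch_size in itertools.product([16, 32, 64], [1, 2, 4, 8]):
    cluster_data = ClusterData(data, num_parts=num_parts)
    loader = ClusterLoader(cluster_data, batch_size=batch_size, shuffle=True)
    # ... run the lifting and recall measurement for this configuration
```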
3. Structure Comparison Utilities
To quantify how much global information is preserved, we implement:
- global lifting computation (golden structures),
- batch-level lifting under Cluster-GCN sampling,
- epoch-wise and cumulative recall metrics:
$$ \text{Recall}_1 = \frac{|G \cap C^{(1)}|}{|G|},\qquad \text{Recall}_n = \frac{\left| G \cap \bigcup_{i=1}^{n} C^{(i)} \right|}{|G|}, $$
where $G$ is the set of golden (globally lifted) structures and $C^{(i)}$ is the set of structures recovered at epoch $i$.
This allows us to rigorously evaluate how well the on-disk pipeline recovers global higher-order structures over time.
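In set terms, both metrics reduce to simple set operations over hashable structure identifiers; a minimal sketch with hypothetical names (`golden`, `per_batch`):

```python
# `golden` is the global lifting; `per_batch[i]` holds the structures
# lifted at epoch i, each identified by a frozenset of node indices.
def epoch_recall(golden: set, batch_structs: set) -> float:
    return len(golden & batch_structs) / len(golden)

def cumulative_recall(golden: set, per_batch: list[set]) -> list[float]:
    """Recall_n = |G ∩ ∪_{i<=n} C^(i)| / |G| for every prefix n."""
    seen, out = set(), []
    for c in per_batch:
        seen |= c
        out.append(len(golden & seen) / len(golden))
    return out

golden = {frozenset(t) for t in [(0, 1, 2), (2, 3, 4), (4, 5, 6)]}
batches = [{frozenset((0, 1, 2))}, {frozenset((2, 3, 4)), frozenset((0, 1, 2))}]
assert cumulative_recall(golden, batches) == [1 / 3, 2 / 3]
```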
Testing & Validation
- The new pipeline passes the existing pipeline and loader test suite.
- Extensive experiments were run on the following datasets:
  - `graph/cora`
  - `graph/pubmed`
  - `graph/reddit`
- Models used for testing:
  - `graph/gcn`
  - `hypergraph/edgnn`
  - `cell/topotune`
  - `simplicial/topotune`
- Memory behavior was verified using the profiling utilities, and the recall/convergence experiments were reproduced using the new sweep scripts.
Details are available in the `tutorial_on_disk_transductive_pipeline.ipynb` notebook.
Added unit tests; fixed minor path issues; improved tutorial.