
[ICCV 2025] Official PyTorch Implementation of "Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics"

[ICCV 2025] PartGS

Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics

Zhirui Gao, Renjiao Yi, Yuhang Huang, Wei Chen, Chenyang Zhu, Kai Xu

arXiv Project page Dataset


PartGS enables both block-level and point-level part-aware reconstruction, preserving part decomposition while maintaining reconstruction precision.

pipeline

This repository contains the official implementation of the paper Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics, which has been accepted to ICCV 2025. PartGS is a self-supervised part-aware reconstruction framework that integrates 2D Gaussians and superquadrics to parse objects and scenes into an interpretable decomposition, leveraging multi-view image inputs to uncover 3D structural information.
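As background, each part primitive in such a hybrid representation is a superquadric, which is defined by the standard inside-outside function. The sketch below illustrates that function only; the function name, argument names, and the exact parameterization used by PartGS are illustrative assumptions, not code from this repo:

```python
import numpy as np

def superquadric_io(points, scale, eps1, eps2):
    """Standard superquadric inside-outside function F(p).

    F < 1 inside the primitive, F == 1 on its surface, F > 1 outside.
    `scale` = (a1, a2, a3) are the half-axis lengths; (eps1, eps2)
    control roundness (1, 1 gives an ellipsoid; values near 0 give a box).
    """
    p = np.abs(np.asarray(points, dtype=float)) / np.asarray(scale, dtype=float)
    x, y, z = p[..., 0], p[..., 1], p[..., 2]
    return (x ** (2.0 / eps2) + y ** (2.0 / eps2)) ** (eps2 / eps1) + z ** (2.0 / eps1)

# A point on the x-axis at distance a1 lies exactly on the surface:
print(superquadric_io([2.0, 0.0, 0.0], scale=(2.0, 1.0, 1.0), eps1=1.0, eps2=1.0))  # -> 1.0
```

Because F is smooth in the shape parameters, the primitives can be fit by gradient descent alongside the Gaussians.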

If you find this repository useful for your research or work, we would really appreciate it if you starred this repository ✨ and cited our paper 📚.

Feel free to contact me ([email protected]) or open an issue if you have any questions or suggestions. We are currently working on an expanded version; if you're interested, feel free to reach out and discuss it with us.

🔥 See Also

You may also be interested in our other works:

  • [ICCV 2025] CurveGaussian: A novel bi-directional coupling framework between parametric curves and edge-oriented Gaussian components, enabling direct optimization of parametric curves through differentiable Gaussian splatting.

  • [TCSVT 2025] PoseProbe: A novel approach that uses everyday objects, commonly found in both images and real life, as pose probes to tackle few-view NeRF reconstruction from only 3 to 6 unposed scene images.

  • [CVMJ 2024] DeepTm: An accurate template matching method based on differentiable coarse-to-fine correspondence refinement, especially designed for planar industrial parts.

📢 News

  • 2025-08-21: Released the ShapeNet dataset.
  • 2025-06-27: The paper is available on arXiv.
  • 2025-06-26: PartGS is accepted to ICCV 2025.

📋 TODO

  • [x] 2025-07-10: Release the training and evaluation code.
  • [x] Release the ShapeNet dataset and training configurations.

🔧 Installation

# download
git clone https://github.com/zhirui-gao/PartGS.git

If you already have an environment set up for 3DGS, you can reuse it.

You only need to install the 2DGS surfel-rasterization submodule:

pip install submodules/diff-surfel-rasterization

[Optional] If you want to render part maps in our point-level optimization stage, you should install diff-surfel-rasterization_part; more details are introduced here.

pip install submodules/diff-surfel-rasterization_part

Otherwise, create a new environment:

conda env create --file environment.yml
conda activate partgs

🚀 Usage

Training

To train a scene with block-level reconstruction, simply use

python train.py -s <path to dataset> -m <path to save> -r 4 --training_type block --data_type dtu

To train a scene with point-level reconstruction (run the block-level stage first), simply use

python train.py -s <path to dataset> -m <path to save> -r 4 --training_type part --data_type dtu --quiet --depth_ratio 1.0 --lambda_dist 1000

Commandline arguments for regularizations

--lambda_normal  # hyperparameter for normal consistency
--lambda_dist # hyperparameter for depth distortion
--depth_ratio # 0 for mean depth and 1 for median depth, 0 works for most cases
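The depth distortion term weighted by --lambda_dist is the per-ray regularizer introduced in 2DGS: it penalizes blending weight spread along each ray so splats concentrate at a single surface depth. A minimal numpy sketch of that loss for one ray (illustrative only; the repo computes it inside the CUDA rasterizer):

```python
import numpy as np

def depth_distortion(weights, depths):
    """Per-ray distortion loss: sum_{i,j} w_i * w_j * |z_i - z_j|.

    The loss is zero when all blending weight sits at a single depth
    and grows as weight spreads along the ray, so minimizing it pulls
    the splats onto one surface.
    """
    w = np.asarray(weights, dtype=float)
    z = np.asarray(depths, dtype=float)
    return float(np.sum(w[:, None] * w[None, :] * np.abs(z[:, None] - z[None, :])))

# All blending weight at one depth -> zero distortion:
print(depth_distortion([1.0, 0.0], [2.0, 5.0]))  # -> 0.0
```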

To train a scene with full reconstruction, use

python train.py -s <path to dataset> -m <path to save> -r 4 --training_type all --data_type dtu --quiet --depth_ratio 1.0 --lambda_dist 1000

Testing

Bounded Mesh Extraction

To export a mesh within a bounded volume, simply use

python render.py -m <path to pre-trained model> -s <path to dataset> 

Command-line arguments you should adjust accordingly for bounded TSDF fusion:

--depth_ratio # 0 for mean depth and 1 for median depth
--voxel_size # voxel size of the TSDF volume
--depth_trunc # depth truncation; depth observations beyond this distance are ignored
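To give intuition for how --voxel_size and --depth_trunc interact in TSDF fusion, here is a toy per-voxel update (function and argument names are illustrative; the actual meshing uses a full TSDF fusion library, not this code):

```python
import numpy as np

def tsdf_update(voxel_depth, observed_depth, depth_trunc, trunc_margin):
    """Truncated signed distance for one voxel along a camera ray.

    Depth readings beyond `depth_trunc` are rejected (the --depth_trunc
    flag); the signed distance is clipped to +/- `trunc_margin`, which
    is typically a few multiples of --voxel_size.
    """
    if observed_depth <= 0 or observed_depth > depth_trunc:
        return None  # measurement rejected by the truncation threshold
    sdf = observed_depth - voxel_depth  # positive in front of the surface, negative behind
    return float(np.clip(sdf / trunc_margin, -1.0, 1.0))

# Voxel 0.25 units in front of the observed surface, margin 0.5:
print(tsdf_update(voxel_depth=1.0, observed_depth=1.25, depth_trunc=3.0, trunc_margin=0.5))  # -> 0.5
```

A smaller --voxel_size gives finer meshes at higher memory cost, while --depth_trunc bounds the fused volume.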

Full evaluation

We provide scripts to evaluate our method on novel view synthesis and geometric reconstruction.

python scripts/dtu_eval.py --dtu <path to the preprocessed DTU dataset>   \
     --DTU_Official <path to the official DTU dataset>  \
     --output_path  <path to save training results>
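The DTU geometry metric is a Chamfer-style distance between the reconstructed and ground-truth point clouds. A simplified, unfiltered sketch of that metric (the official DTU script additionally applies visibility masks, distance thresholds, and downsampling):

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between two point sets of shape (N, 3) and (M, 3).

    Averages nearest-neighbor distances in both directions:
    pred -> gt measures accuracy, gt -> pred measures completeness.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # pairwise (N, M)
    acc = d.min(axis=1).mean()   # accuracy: each predicted point to its nearest GT point
    comp = d.min(axis=0).mean()  # completeness: each GT point to its nearest prediction
    return 0.5 * (acc + comp)

print(chamfer_distance([[0.0, 0.0, 0.0]], [[1.0, 0.0, 0.0]]))  # -> 1.0
```

The brute-force pairwise matrix is fine for toy inputs; real evaluations use a KD-tree for the nearest-neighbor queries.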

📊 Dataset

DTU

For reconstruction on the DTU dataset, we use the same preprocessed data as 2DGS; please download it from Drive or Hugging Face. You also need to download the ground-truth DTU point clouds.

BlendedMVS

For reconstruction on BlendedMVS, the dataset can be downloaded from here, provided by NeuS.

ShapeNet dataset

We rendered four ShapeNet categories (gun, chair, desk, and airplane), selecting 15 instances per category. For each instance, we provide the ground-truth OBJ mesh for evaluation and 100 rendered images, split evenly into 50 training and 50 test views. You can download them from Baidu Disk or Hugging Face.

👀 Visual Results

DTU Dataset

40 55

ShapeNet Dataset

chair plane

👊 Application

Editing

chair

Simulation

chair

⭐ Acknowledgements

This project is built upon GaMeS and 2DGS. We thank all the authors for their great repos!

📚 Citation

If you find our work helpful, please consider citing:

@misc{gao2025selfsupervisedlearninghybridpartaware,
      title={Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics}, 
      author={Zhirui Gao and Renjiao Yi and Yuhang Huang and Wei Chen and Chenyang Zhu and Kai Xu},
      year={2025},
      eprint={2408.10789},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.10789}, 
}