# macro-regulator
Official implementation of NeurIPS'24 paper "Reinforcement Learning Policy as Macro Regulator Rather than Macro Placer".
This repository contains the Python code for MaskRegulate, a reinforcement learning method for macro placement. Formulated as a regulator rather than a placer and equipped with RegularMask, MaskRegulate empirically achieves significant improvements over previous methods.
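To make the regulator formulation concrete, here is a minimal conceptual sketch of the refinement loop, assuming hypothetical `env`/`policy` interfaces (this is illustrative only, not the repository's actual API):

```python
# Illustrative sketch only: the environment/policy interfaces shown here are
# hypothetical, not the actual MaskRegulate API.
def regulate(env, policy, initial_placement):
    """Refine a finished layout macro by macro, instead of placing from scratch."""
    state = env.reset(initial_placement)   # start from an existing placement
    done = False
    while not done:
        mask = env.regular_mask(state)     # RegularMask: feasible, regularity-aware positions
        action = policy.act(state, mask)   # choose a new position for the next macro
        state, reward, done = env.step(action)  # reward reflects layout-quality improvement
    return env.current_placement()
```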
## Requirements
- python==3.8.5
- torch==1.7.1
- torchvision==0.8.2
- torchaudio==0.7.2
- pyyaml==5.3.1
- gym==0.22.0
- Shapely==2.0.4
- matplotlib==3.4.3
- cairocffi==1.7.0
- tqdm==4.61.2
- tensorboard==2.14.0
- scikit_learn==1.3.2
- numpy==1.21.2
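Assuming a Python 3.8 environment, the pinned versions above can be installed directly with pip (a convenience command mirroring the list; the repository may ship its own requirements file):

```bash
# Install the pinned dependencies listed above (Python 3.8 environment assumed).
pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 \
    pyyaml==5.3.1 gym==0.22.0 Shapely==2.0.4 matplotlib==3.4.3 \
    cairocffi==1.7.0 tqdm==4.61.2 tensorboard==2.14.0 \
    scikit_learn==1.3.2 numpy==1.21.2
```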
## File structure
- `benchmark/` stores the benchmarks for running. Please download the ICCAD2015 benchmark and move it to `benchmark/` (i.e., `benchmark/superblue1`).
- `config/` stores the hyperparameters for our algorithm.
- `DREAMPlace_source/` serves as a third-party standard-cell placer borrowed from DREAMPlace.
- `policy/` stores a pretrained policy trained on `superblue1`, `superblue3`, `superblue4`, and `superblue5`.
- `src/` contains the source code of MaskRegulate.
- `utils/` defines some functions used for optimization.
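With the ICCAD2015 benchmark downloaded and DREAMPlace compiled (see Usage below), the layout should look roughly like this (a sketch; the `DREAMPlace/` directory is created during compilation):

```
macro-regulator/
├── benchmark/           # e.g., benchmark/superblue1
├── config/
├── DREAMPlace/          # created by compiling DREAMPlace_source (see Usage)
├── DREAMPlace_source/
├── policy/              # pretrained_model.pkl
├── src/                 # main.py
└── utils/
```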
## Usage
Please first download the Docker image from Baidu Netdisk, or pull `duketomlist/macro-regulator` from the cloud:
```bash
docker pull duketomlist/macro-regulator:cuda
```
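To work inside the container with GPU access and the repository mounted, an invocation along these lines should work (the host path is a placeholder):

```bash
# Start an interactive container with GPU access; adjust the host path as needed.
docker run --gpus all -it \
    -v /path/to/macro-regulator:/workspace/macro-regulator \
    duketomlist/macro-regulator:cuda /bin/bash
```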
Then, please compile `DREAMPlace_source` inside the Docker container with the commands below:
```bash
cd DREAMPlace_source
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX=../../DREAMPlace
make
make install
```
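If the build succeeds, the install step populates the `DREAMPlace/` directory at the repository root (this is what `-DCMAKE_INSTALL_PREFIX=../../DREAMPlace` points to); a quick sanity check from inside `build/`:

```bash
# Verify the install prefix was populated (run from DREAMPlace_source/build).
ls ../../DREAMPlace
```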
After that, please download the ICCAD2015 benchmark via Google Drive.
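Once downloaded, place each design under `benchmark/` as described in the file structure above; for example (the archive name is a placeholder):

```bash
# Extract the ICCAD2015 benchmark and move a design into benchmark/.
tar -xzf iccad2015.tar.gz          # archive name is hypothetical
mv superblue1 benchmark/superblue1
```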
## Parameters
- `--seed` random seed for running.
- `--gpu` GPU ID used by the algorithm execution.
- `--episode` number of episodes for training.
- `--checkpoint_path` the saved model to be loaded.
- `--eval_policy` only evaluate the policy given by `--checkpoint_path`.
- `--dataset_path` the placement file to regulate. MaskRegulate will improve the chip layout obtained from DREAMPlace if `--dataset_path` is not provided. Currently, MaskRegulate only supports training on a single benchmark when a `--dataset_path` is provided (i.e., if `superblue1_reference.def` is given, please set `--benchmark_train=[superblue1]` and `--dataset_path=./superblue1_reference.def`); see the example after this list.
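For instance, to regulate an existing DEF on superblue1 (a sketch combining the flags above; run from `src/`, with the DEF path and seed purely illustrative):

```bash
python main.py --seed=0 --gpu=0 \
    --benchmark_train=[superblue1] --benchmark_eval=[superblue1] \
    --dataset_path=./superblue1_reference.def
```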
## Run a training task
Please first navigate to the `src` directory, then run:

```bash
python main.py --benchmark_train=[Benchmark1,Benchmark2] --benchmark_eval=[Benchmark1',Benchmark2']
```
- `--benchmark_train` contains the benchmarks to train on.
- `--benchmark_eval` contains the benchmarks to evaluate on.
For example, if you want to train MaskRegulate on superblue1 and superblue3 and evaluate on superblue1, superblue3, and superblue5, run the command below:
```bash
python main.py --benchmark_train=[superblue1,superblue3] --benchmark_eval=[superblue1,superblue3,superblue5]
```
The script `run_train.sh` is provided for a quick start.
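The script's contents are not reproduced here, but a minimal equivalent might look like this (flag values are examples only):

```bash
#!/bin/bash
# Hypothetical equivalent of run_train.sh; adjust benchmarks, seed, and GPU.
cd src
python main.py --seed=0 --gpu=0 --episode=100 \
    --benchmark_train=[superblue1,superblue3] \
    --benchmark_eval=[superblue1,superblue3,superblue5]
```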
## Run a testing task
We also provide a model pre-trained on `superblue1`, `superblue3`, `superblue4`, and `superblue5` in `policy/pretrained_model.pkl`, which can be loaded and evaluated. For example, run the following command to test our policy on superblue1:
```bash
python main.py --benchmark_train=[] --benchmark_eval=[superblue1] --checkpoint_path=../policy/pretrained_model.pkl --eval_policy=True
```
The script `run_test.sh` is provided for a quick start.
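Since `tensorboard` is among the requirements, training and evaluation progress can presumably be monitored with it; the log directory below is a guess and should be adjusted to wherever the run writes its event files:

```bash
# Log directory is an assumption; point it at the run's event files.
tensorboard --logdir ./logs
```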
## Citation
```bibtex
@inproceedings{macro-regulator,
  author    = {Ke Xue and Ruo-Tong Chen and Xi Lin and Yunqi Shi and Shixiong Kai and Siyuan Xu and Chao Qian},
  title     = {Reinforcement Learning Policy as Macro Regulator Rather than Macro Placer},
  booktitle = {Advances in Neural Information Processing Systems 38 (NeurIPS'24)},
  year      = {2024},
  pages     = {140565--140588},
  address   = {Vancouver, Canada}
}
```