analog-nas
Some code issues; implementation does not match the paper.
Thank you for your excellent work. The code in this repository appears to be outdated and does not match the paper. Could you please confirm? I have encountered the following issues and would appreciate your assistance in resolving them:
- Reference before assignment in `xgboost.py`: the getters are called before the path attributes they depend on are set:

  ```python
  self.ranker = self.get_ranker()
  self.avm_predictor = self.get_avm_predictor()
  self.std_predictor = self.get_std_predictor()
  self.ranker_path = ranker_path
  self.avm_predictor_path = avm_predictor_path
  self.std_predictor_path = std_predictor_path
  ```
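A minimal sketch of a possible fix (class and method names are placeholders mirroring the snippet above, not the repo's actual class): assigning the `*_path` attributes first avoids the reference-before-assignment error, assuming each `get_*()` reads its corresponding path attribute.

```python
class Evaluator:
    """Hypothetical illustration: paths must be set before the getters run."""

    def __init__(self, ranker_path, avm_predictor_path, std_predictor_path):
        # Set the path attributes first so the getters can read them.
        self.ranker_path = ranker_path
        self.avm_predictor_path = avm_predictor_path
        self.std_predictor_path = std_predictor_path
        self.ranker = self.get_ranker()
        self.avm_predictor = self.get_avm_predictor()
        self.std_predictor = self.get_std_predictor()

    def get_ranker(self):
        # Placeholder for loading the XGBoost ranker from self.ranker_path.
        return f"loaded:{self.ranker_path}"

    def get_avm_predictor(self):
        return f"loaded:{self.avm_predictor_path}"

    def get_std_predictor(self):
        return f"loaded:{self.std_predictor_path}"
```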
- The ranker XGBoost model (with weights loaded) always outputs 0.5 regardless of the architecture config; could you please check again? As a result, the EA optimization towards the highest ranking does not work.
- The EA algorithm (`ea_optimized.EAOptimizer`) does not match Algorithm 1 in the paper: e.g., there is no code selecting the top-50 population, and no "union" or "crossover" step.
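For reference, a minimal sketch of one EA generation as I read Algorithm 1 (top-k selection followed by a union with the mutated set). `fitness` and `mutate` are placeholder callables, and the toy integer "architectures" are purely illustrative:

```python
import random

def ea_step(population, fitness, mutate, top_k=50):
    # Select the top_k architectures by predicted ranking score.
    selected = sorted(population, key=fitness, reverse=True)[:top_k]
    # Mutate each selected architecture.
    mutated = [mutate(arch) for arch in selected]
    # Union: the next population is the selected set plus its mutations.
    return selected + mutated

# Toy example: architectures are integers, fitness is identity,
# mutation adds small random noise.
pop = list(range(200))
next_pop = ea_step(pop,
                   fitness=lambda a: a,
                   mutate=lambda a: a + random.randint(-3, 3))
```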
Hi @blyucs, thank you for raising this issue.
- The ranker's weights have been updated. Additional training was done to ensure we can handle any type of search space, including modifying any layer's hyperparameters in the ResNet macro-architecture.
- The union is done by directly appending the mutated architectures to the selected ones.
- There is no crossover in the optimized EA. In NAS, crossover is generally avoided because of the high number of invalid architectures it produces.
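A toy illustration of the invalid-architecture point (the non-increasing-width constraint and the `valid`/`crossover` helpers are invented for this example, not taken from the repo): naive one-point crossover between two valid architectures can easily violate a structural constraint.

```python
def valid(arch):
    # Invented constraint: layer widths must be non-increasing.
    return all(a >= b for a, b in zip(arch, arch[1:]))

def crossover(a, b, point):
    # Naive one-point crossover on layer-width lists.
    return a[:point] + b[point:]

p1 = [32, 16, 8, 4]       # valid
p2 = [128, 64, 32, 16]    # valid
child = crossover(p1, p2, 2)  # [32, 16, 32, 16]: width increases mid-network
```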