Yen-Chang Hsu
Thanks for your interest in our repo. You are right: batch_size=1 and batch_size!=1 give different results. That is why the default setting in the MNIST demo uses batch_size=1 (see [here](https://github.com/GT-RIPL/Continual-Learning-Benchmark/blob/master/agents/regularization.py#L110)), although empirically...
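As a toy illustration of why the batch size matters here (a sketch, not code from the repo): regularization-based agents build importance weights from squared gradients, and averaging gradients over a batch *before* squaring is not the same as averaging the per-sample squares.

```python
# Toy example: importance estimates from squared gradients depend on batch
# size, because opposite-sign gradients cancel inside a batch average.
def per_sample_importance(grads):
    """Mean of squared per-sample gradients (batch_size=1 behaviour)."""
    return sum(g * g for g in grads) / len(grads)

def batched_importance(grads, batch_size):
    """Square of the batch-averaged gradient, averaged over batches."""
    sq = []
    for i in range(0, len(grads), batch_size):
        batch = grads[i:i + batch_size]
        mean_g = sum(batch) / len(batch)
        sq.append(mean_g * mean_g)
    return sum(sq) / len(sq)

grads = [1.0, -1.0, 3.0, -3.0]  # toy per-sample gradients of one parameter
print(per_sample_importance(grads))    # 5.0
print(batched_importance(grads, 2))    # 0.0 -- cancellation inside each batch
```

With batch_size=1 the two quantities coincide; with larger batches the squared-mean underestimates the mean-square, which is one reason results can drift with batch size.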
Hi, thanks for your interest in our work. We do not have the code for DGR and RtF; those two results are taken from the [paper](https://arxiv.org/pdf/1809.10635.pdf), which also uses the same...
1. The precision and recall of G are printed to the console if you run this [script](https://github.com/GT-RIPL/L2C/blob/master/scripts/exp_unsupervised_transfer_Omniglot.sh#L2). The performance of G learned on Omniglot-bg and tested on Omniglot-eval or MNIST is...
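For readers unfamiliar with how precision/recall applies to G: G predicts whether two samples belong to the same class, so the metrics are computed over sample pairs. A minimal sketch (the `pred_same` dict interface is hypothetical, for illustration only):

```python
from itertools import combinations

def pairwise_precision_recall(pred_same, labels):
    """Precision/recall of pairwise same-class predictions.

    pred_same: dict mapping an index pair (i, j) to G's binary prediction
               (True = "same class"); hypothetical interface for illustration.
    labels:    ground-truth class label per sample.
    """
    tp = fp = fn = 0
    for i, j in combinations(range(len(labels)), 2):
        same = labels[i] == labels[j]
        pred = pred_same[(i, j)]
        if pred and same:
            tp += 1       # correctly predicted "same class"
        elif pred and not same:
            fp += 1       # predicted "same" but classes differ
        elif not pred and same:
            fn += 1       # missed a same-class pair
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

labels = [0, 0, 1, 1]
pred = {(0, 1): True, (0, 2): False, (0, 3): True,
        (1, 2): False, (1, 3): False, (2, 3): True}
print(pairwise_precision_recall(pred, labels))  # (0.666..., 1.0)
```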
Hi, Omniglot has more than nine hundred classes, so the situation is the same. The imbalance is addressed by [sampling in the Omniglot dataloader](https://github.com/GT-RIPL/L2C/blob/master/dataloaders/default.py#L107). Training G from scratch...
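The idea behind that sampling step can be sketched as weighting each sample inversely to its class frequency, so every class contributes equal probability mass to each draw (a sketch of the general technique; the repo's dataloader may implement it differently):

```python
from collections import Counter

def balanced_sample_weights(labels):
    """Per-sample weights inversely proportional to class frequency.

    Drawing samples with these weights (e.g. via a weighted random
    sampler) picks each class equally often regardless of imbalance.
    """
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

labels = [0, 0, 0, 1]          # class 0 is three times as frequent as class 1
weights = balanced_sample_weights(labels)
total = sum(weights)
prob_class1 = weights[3] / total  # chance of drawing the lone class-1 sample
print(prob_class1)                # 0.5 -- each class now has equal mass
```

In PyTorch these weights would typically be passed to `torch.utils.data.WeightedRandomSampler`.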
This [line](https://github.com/GT-RIPL/L2C/blob/master/scripts/exp_unsupervised_transfer_Omniglot.sh#L2) is the command that trains the function G.
Hi linhlt-it-ee, your question was answered in [another thread](https://github.com/GT-RIPL/L2C/issues/10).