
Add the possibility to pretrain on multiple tasks

Status: Open · AlbinSou opened this issue 4 years ago · 3 comments

At the moment, the nc_benchmark generator function provides an nc_first_task option, which works well for pre-training in the class-incremental scenario. However, no equivalent option is available if one wants to pretrain in the task-incremental scenario. It would be nice to have an option that can be used together with task_labels=True and that allows pre-training on multiple tasks at the same time, in a multi-task training manner.

This kind of pre-training is used, for instance, in "Lifelong Learning of Compositional Structures".

A quick fix that I'm using for now, but that breaks some things (maybe this should go under bugs?), is the following:

from avalanche.benchmarks.utils import AvalancheConcatDataset

# Number of experiences to pretrain on
pretrain = 4
pretrain_datasets = [exp.dataset for exp in scenario.train_stream[:pretrain]]

# Modify the first experience in place so that it contains the data of the
# first `pretrain` experiences
first_experience = scenario.train_stream[0]
first_experience.dataset = AvalancheConcatDataset(pretrain_datasets)

# Train on the modified first experience
cl_strategy.train(first_experience)

# Train on the remaining experiences
for experience in scenario.train_stream[pretrain:]:
    cl_strategy.train(experience)

Doing this works as intended, except that for some reason it multiplies the batch size by the number of pretraining tasks:

  • size of strategy.mb_x when pretrain=4: (256, 3, 32, 32)
  • size of strategy.mb_x when pretrain=1: (64, 3, 32, 32)

AlbinSou · Sep 15 '21 09:09

I agree about the nc_first_task option; we should also have it for multi-task scenarios.

Your snippet seems wrong. Instead of modifying the experiences in place, it's easier to create a new benchmark by first concatenating/splitting the datasets however you like, and then using one of the generic builders, like dataset_benchmark.

If you still get an error using dataset_benchmark, feel free to open a question on the Discussions.
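For reference, a minimal sketch of that approach could look like the following. It assumes the existing `scenario` and `cl_strategy` from the snippet above and `pretrain = 4`; the exact import paths may differ between Avalanche versions.

from avalanche.benchmarks.generators import dataset_benchmark
from avalanche.benchmarks.utils import AvalancheConcatDataset

pretrain = 4

# Collect the per-experience datasets from the original benchmark
train_datasets = [exp.dataset for exp in scenario.train_stream]
test_datasets = [exp.dataset for exp in scenario.test_stream]

# Merge the first `pretrain` datasets into a single pre-training dataset
# and keep the remaining ones as individual experiences
new_train = [AvalancheConcatDataset(train_datasets[:pretrain])] + train_datasets[pretrain:]
new_test = [AvalancheConcatDataset(test_datasets[:pretrain])] + test_datasets[pretrain:]

# Build a fresh benchmark whose first experience is the multi-task
# pre-training set
new_benchmark = dataset_benchmark(new_train, new_test)

for experience in new_benchmark.train_stream:
    cl_strategy.train(experience)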

AntonioCarta · Sep 16 '21 15:09

> I agree about the nc_first_task option; we should also have it for multi-task scenarios.
>
> Your snippet seems wrong. Instead of modifying the experiences in place, it's easier to create a new benchmark by first concatenating/splitting the datasets however you like, and then using one of the generic builders, like dataset_benchmark.
>
> If you still get an error using dataset_benchmark, feel free to open a question on the Discussions.

Yes, I agree that this is an ugly fix. I also tried it the way you suggested, but the batch size is still multiplied by the number of tasks in the first experience. I think this comes from the TaskBalancedDataLoader, but I don't know whether it's intended that the batch size is increased that way.

AlbinSou · Sep 16 '21 18:09

Ok, now I get it. Yes, that's normal: some of the dataloaders, like the TaskBalancedDataLoader, add batch_size samples for each group (task/experience/...). Maybe we should rename the parameter to avoid confusion.
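In other words, the effective minibatch ends up being roughly batch_size times the number of groups. As a rough workaround sketch (assuming the behaviour described above, and with illustrative numbers only; the import path for Naive may differ across Avalanche versions), one can divide the strategy's train_mb_size by the number of tasks merged into the first experience to keep the effective minibatch size constant:

from torch.nn import CrossEntropyLoss
from torch.optim import SGD
from torchvision.models import resnet18
from avalanche.training.strategies import Naive

# Illustrative numbers: with 4 tasks merged into the first experience,
# a train_mb_size of 16 yields minibatches of 16 * 4 = 64 samples
n_pretrain_tasks = 4
target_mb_size = 64

model = resnet18(num_classes=10)
cl_strategy = Naive(
    model,
    SGD(model.parameters(), lr=0.01),
    CrossEntropyLoss(),
    train_mb_size=target_mb_size // n_pretrain_tasks,
    train_epochs=1,
)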

AntonioCarta · Sep 17 '21 07:09