
Processing time for each iteration

Open caglayantuna opened this issue 4 years ago • 3 comments

We thought it would be useful if TensorLy had an option to return the processing time of each iteration. Decomposition functions could return it, alongside the reconstruction error, when `return_time_it` is True. The `time` module can be used as follows:

    import time

    time_it = []
    for iteration in range(n_iter_max):
        time_init = time.time()
        for mode in modes:
            ...  # factor updates for this mode (omitted)
        time_it.append(time.time() - time_init)

What do you think about this new feature? If you agree, I can open a PR.

caglayantuna avatar Feb 02 '22 08:02 caglayantuna

Just to add to the discussion: as @JeanKossaifi and I discussed, the timings could perhaps be returned in a dictionary, along with e.g. the reconstruction error. But I'm afraid that would break compatibility with previous versions?
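For illustration, one backward-compatible option is an opt-in flag that switches on the extra dictionary return, so existing call sites keep working. This is only a sketch; the `return_diagnostics` flag, the dictionary keys, and the placeholder values below are hypothetical, not part of TensorLy's API:

    def decompose(tensor, return_diagnostics=False):
        # Placeholder "decomposition" standing in for the real algorithm
        factors = [tensor]
        # Hypothetical per-iteration diagnostics collected during the solve
        diagnostics = {
            "rec_errors": [0.9, 0.5, 0.2],  # reconstruction error per iteration
            "time_it": [0.01, 0.01, 0.01],  # seconds per iteration
        }
        if return_diagnostics:
            return factors, diagnostics  # new, opt-in return shape
        return factors                   # old behaviour preserved

    factors = decompose([1, 2, 3])  # unchanged call sites still work
    factors, diag = decompose([1, 2, 3], return_diagnostics=True)

The design choice here is that the default return is untouched; only callers who explicitly ask for diagnostics see the new tuple shape.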

cohenjer avatar Feb 09 '22 10:02 cohenjer

@cohenjer @caglayantuna I had another idea after we met about this. Perhaps, rather than adding another optional return, we should add a callbacks interface (e.g., see scipy.optimize.minimize)? This would provide one solution for returning time, error, or anything else. For example, we use tqdm for progress bars in our hand-rolled methods, and a callback would make that possible with TensorLy methods as well. Folks might also want to do things like store parts of the decomposition at each iteration to study convergence. Essentially, this allows you to run and save whatever you want on each iteration.
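A minimal sketch of how such a callback hook could subsume the per-iteration timing request above. Everything here is hypothetical (the `callback` parameter, the info dictionary passed to it, and the toy solver), a sketch of the pattern rather than TensorLy's actual API:

    import time

    def decompose(tensor, n_iter_max=100, callback=None):
        """Toy iterative solver illustrating a per-iteration callback hook."""
        errors = []
        for iteration in range(n_iter_max):
            # ... factor updates would happen here (omitted in this sketch) ...
            error = 1.0 / (iteration + 1)  # placeholder reconstruction error
            errors.append(error)
            if callback is not None:
                # Hand the caller whatever per-iteration info they want to record
                callback({"iteration": iteration, "error": error})
        return errors

    # Usage: reproduce the timing list from the original proposal via a callback
    time_it = []
    last = [time.time()]

    def timing_callback(info):
        now = time.time()
        time_it.append(now - last[0])
        last[0] = now

    decompose(None, n_iter_max=5, callback=timing_callback)

The same hook would cover the other use cases mentioned: a tqdm progress bar, or snapshotting factors at each iteration to study convergence.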

aarmey avatar Feb 09 '22 15:02 aarmey

@aarmey This is a great idea, in fact. I have concerns about how to return the results; see #372, which I just opened. Once we figure out how to return things properly, returning time and/or adding callback functions will be very easy.

cohenjer avatar Feb 11 '22 09:02 cohenjer