Metod Jazbec


Starting with Functional Laplace.

Main changes since last time:

- addressed the underfitting of `FunctionalLaplace` in `regression_example.py` and fixed the issue with the marginal likelihood: https://github.com/AlexImmer/Laplace/pull/55#discussion_r833521031
- added `calibration_gp_example.py`, where `FunctionalLaplace` is used on a pre-trained...
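
For anyone trying this out, here is a minimal sketch of the intended `FunctionalLaplace` workflow on a toy regression problem. The constructor argument `n_subset` and the `log_marginal_likelihood` call follow the laplace-torch API as later merged; they are assumptions about this PR's state, not a verbatim excerpt from `regression_example.py`:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from laplace import FunctionalLaplace

torch.manual_seed(0)

# Toy 1D regression problem
X = torch.linspace(-2, 2, 100).unsqueeze(-1)
y = torch.sin(3 * X) + 0.1 * torch.randn_like(X)
train_loader = DataLoader(TensorDataset(X, y), batch_size=20)

model = torch.nn.Sequential(
    torch.nn.Linear(1, 50), torch.nn.Tanh(), torch.nn.Linear(50, 1)
)
# (assume `model` has already been trained to a MAP estimate)

# GP/functional Laplace using a subset of M=50 training points
la = FunctionalLaplace(model, "regression", n_subset=50)
la.fit(train_loader)

# GP predictive mean and variance at the inputs
f_mu, f_var = la(X)

# GP log marginal likelihood (the quantity the fix above concerns);
# assumed signature, mirroring the parametric Laplace classes
lml = la.log_marginal_likelihood()
```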

I also benchmarked the code against the [BNN-predictions repo](https://github.com/AlexImmer/BNN-predictions) in terms of speed, and the code here is 2-3x slower. After investigating, I believe the main reason is that the code here allows...
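
As a side note for anyone reproducing such a comparison: GPU kernels launch asynchronously, so timings need explicit synchronization. A generic sketch (illustrative only, not the benchmark actually used here):

```python
import time
import torch

def time_fit(la, train_loader):
    """Wall-clock time for `la.fit`; synchronizes CUDA so asynchronous
    GPU work is not under-counted."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    la.fit(train_loader)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return time.perf_counter() - start
```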

Improved memory performance (see the [changes](https://github.com/AlexImmer/Laplace/pull/55/commits/e20ac6c56bae5fbd147e49d03948587bf865880d) to the `_kernel` methods in `FunctionalLaplace` and to the `jacobians` method in `BackPackInterface`). Larger batch sizes can now be used (e.g., `b=256` on an Nvidia 3090 GPU), which results...
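
To illustrate the idea behind those memory changes: instead of materializing the full N x P Jacobian, the kernel can be assembled block by block from per-batch Jacobians, freeing each batch Jacobian as soon as its blocks are computed. A hedged sketch (the names are illustrative, not the library's actual `_kernel` internals; `jacobian_fn` stands in for something like a wrapper around `BackPackInterface.jacobians`):

```python
import torch

def blockwise_kernel(jacobian_fn, loader_1, loader_2, prior_var=1.0):
    """Assemble k(X1, X2) = J(X1) (prior_var * I) J(X2)^T block by block.

    `jacobian_fn(x)` is assumed to return a (batch, outputs, params) tensor.
    """
    row_blocks = []
    for x1, _ in loader_1:
        J1 = jacobian_fn(x1).flatten(0, 1)       # (b1 * outputs, params)
        col_blocks = []
        for x2, _ in loader_2:
            J2 = jacobian_fn(x2).flatten(0, 1)   # (b2 * outputs, params)
            col_blocks.append(prior_var * (J1 @ J2.T))
            del J2                               # free the batch Jacobian
        row_blocks.append(torch.cat(col_blocks, dim=1))
        del J1
    return torch.cat(row_blocks, dim=0)
```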

Yeah, I tried setting `p.requires_grad = False` as suggested in the [subnetwork example](https://aleximmer.github.io/Laplace/#subnetwork-laplace). However, in my case I want to do a **First-and-Last-Layer** Laplace approximation (e.g., as done in some experiments...
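
For concreteness, here is a sketch of how a first-and-last-layer subnetwork can be specified with `laplace.utils.ModuleNameSubnetMask`, following the pattern from the library's subnetwork example. The toy model and the module names `'0'`/`'2'` are placeholders for an actual pre-trained network:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from laplace import Laplace
from laplace.utils import ModuleNameSubnetMask

# Toy stand-in for a pre-trained model; in this Sequential, the first and
# last layers have module names '0' and '2' (see dict(model.named_modules())).
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2)
)
train_loader = DataLoader(
    TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))),
    batch_size=16,
)

# Select all parameters belonging to the first and last layers
subnetwork_mask = ModuleNameSubnetMask(model, module_names=["0", "2"])
subnetwork_mask.select()
subnetwork_indices = subnetwork_mask.indices

# Fit a Laplace approximation over only those parameters
la = Laplace(
    model,
    "classification",
    subset_of_weights="subnetwork",
    hessian_structure="full",
    subnetwork_indices=subnetwork_indices,
)
la.fit(train_loader)
```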

Regarding the "usefulness" of `SubnetLaplace`: using the Laplace library, I was able to fit a first-and-last-layer LA approximation on ~0.5B-parameter NNs. And the use of the Laplace library was crucial...