Results: 24 comments of Pooneh Mousavi

> > > Code is not working for CelebA. I am not able to obtain the results as reported in the paper. The image reconstruction and random sample generation both...

Thanks for the valuable pull request, @Chaanks. I checked and ran some tests; everything appears to be functioning correctly. Moving forward with the project, let's address the following points:...

@salah-zaiem and @Chaanks, I have refactored the Discrete HuBERT to include bitrate scalability (getting tokens from multiple layers) and a deduplication option. Please check and let me know what you think. If we...
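For illustration only, here is a minimal sketch of what multi-layer token extraction with deduplication could look like. The helper names, the dict-based interface, and the per-layer k-means objects are assumptions for the sake of the example, not the actual SpeechBrain Discrete HuBERT code:

```python
import torch


def dedup(tokens: torch.Tensor) -> list:
    """Drop consecutive duplicate token ids from a 1-D tensor (hypothetical helper)."""
    if tokens.numel() == 0:
        return []
    kept = [int(tokens[0])]
    for t in tokens[1:]:
        if int(t) != kept[-1]:
            kept.append(int(t))
    return kept


def multi_layer_tokens(feats_per_layer, kmeans_per_layer, deduplicate=True):
    """Quantize hidden states from several SSL layers, each with its own k-means model.

    feats_per_layer:  {layer_id: (time, dim) tensor of hidden states}
    kmeans_per_layer: {layer_id: fitted sklearn KMeans for that layer}
    Returns {layer_id: list of token ids}; selecting more layers yields more
    token streams and therefore a higher bitrate (bitrate scalability).
    """
    out = {}
    for layer_id, feats in feats_per_layer.items():  # the per-layer "for loop" in the forward
        ids = torch.as_tensor(kmeans_per_layer[layer_id].predict(feats.cpu().numpy()))
        out[layer_id] = dedup(ids) if deduplicate else ids.tolist()
    return out
```

In this layout, deduplication only removes repeats within a single layer's stream, so layers can still be added or dropped independently without retraining the other layers' k-means models.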

> Have you checked how the "for loops" in the forward affect downstream training duration? k-means training should still be separated for every layer, right?

I didn't do the analysis...

The problem with peft arises when we want to load the model from the SpeechBrain checkpoint. It is a mess to make it work, and it could also cause the...
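To make the mismatch concrete, here is a rough sketch assuming a Hugging Face causal LM wrapped with peft's LoRA; the model id and checkpoint path are placeholders, and the SpeechBrain checkpoint is assumed to hold a plain state_dict of the unwrapped model:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder model id
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora_cfg)

# Wrapping the model renames its parameters (peft prefixes them, typically with
# "base_model.model."), so a checkpoint saved from the unwrapped model no longer
# matches the wrapped state_dict and has to be remapped before loading.
ckpt = torch.load("path/to/speechbrain_checkpoint.ckpt", map_location="cpu")  # placeholder path
missing, unexpected = model.load_state_dict(ckpt, strict=False)
print(f"{len(missing)} missing keys, {len(unexpected)} unexpected keys")
```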

I totally agree. What we have done for Llama 2 is QLoRA (quantization + LoRA), and it is only applicable to Llama 2. I think with this trend of public LLMs being released so often, and all...
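For reference, a minimal QLoRA-style sketch (4-bit quantization of the frozen backbone plus LoRA adapters) using transformers and peft; the model id, target modules, and hyperparameters below are placeholders, not necessarily what was used for Llama 2 here:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder model id

# The "quantization" half: load the frozen backbone in 4-bit NF4.
bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_cfg, device_map="auto"
)

# The "LoRA" half: attach small trainable low-rank adapters; only these are updated.
model = prepare_model_for_kbit_training(model)
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
```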

We have uploaded the models with 1000/2000 clusters for different layers to our own repo. We plan to move all the trained k-means models to the SpeechBrain repo once the...

@TParcollet I am also a bit busy with the NeurIPS deadline (June 5th), but after that I could actively work on it.

Hi @anupsingh15, the different folders are related to the ablation study we did on the effect of the amount of k-means training data: 1. LibriSpeech-100-360-500: only using

> @poonehmousavi could you review and test the code as mentioned? It looks ready to me. Thanks!

Sure, I will do it by tomorrow.