Michael Murray
The goal was compiling directly on the M1. And yes, I eventually found your old discussion and have succeeded. When will this become more official?
Okay. You can't lose this ticket.
Oh one last thing. Is integration with tensorflow-metal on the road map either?
Yeah, it can't be included, but it would be great if I could load the pluggable device. You can close the ticket.
Experiencing this with 3.11.2.
@Muskan19577 I had to move back to python 3.9
> fairseq-hydra-train \
> --config-dir ./conf/finetune/ --config-name base_lrs3_30h.yaml \
> task.data=${data_dir} task.label_dir=${data_dir} \
> task.tokenizer_bpe_model=/gs/hs0/tga-tslab/bowen/Dataset/LRS3/spm1000/spm_unigram1000.model \
> model.w2v_path=/gs/hs0/tga-tslab/bowen/av_hubert/pretrain_model/base_lrs3_iter5.pt \
> hydra.run.dir=./exps/finetune/dist common.user_dir=`pwd` \
> distributed_training.distributed_world_size=${world_size} \
> distributed_training.nprocs_per_node=${nproc_per_node} \
>...
I will try to build it and contribute it back. Thank you for providing aarch anyway; that one was very useful.
I've deployed an arm64 VM and installed the relevant tools, so I should be able to supply the lib back to the project this week. @gpu what version of the JDK...
@gpu @blueberry It's a fresh VM; I didn't want any chance of contamination for a library that other people may rely on. I can install any jdk version...