minuenergy
I want to fine-tune the head a second time: it was first trained on my custom Carla dataset, and now I want to train on NYU + KITTI (train_mix.py). I use this command [python train_mix.py -m zoedepth_nk --pretrained_resource="local::Carla/ZoeDepthv1_13-Jul_04-44-e6e03405a1f8_best.pt"] but I...
train_file: 'datasets/annotations_all/msrvtt_caption/train.jsonl'
test_file: 'datasets/annotations_all/msrvtt_caption/test.jsonl'
video_root: "datasets/MSRVTT/data/MSRVTT/videos/all"

I got YouTubeClips and AllVideoDescriptions.txt (videos and annotations) from https://www.cs.utexas.edu/users/ml/clamp/videoDescription/ and the splits from https://github.com/albanie/collaborative-experts/blob/master/misc/datasets/msvd/README.md. Could you provide the evaluation code, please? Thank you.
In Detection & Segmentation, for example, the provided instances_minitrain2017.json [Json](https://drive.google.com/open?id=1lezhgY4M_Ag13w0dEzQ7x_zQ_w0ohjin) references "000000527649.jpg", but that file is not in the train2017/ set I downloaded from Hugging Face ([coco_minitrain_25k.zip [Huggingface]](https://huggingface.co/datasets/bryanbocao/coco_minitrain/blob/main/coco_minitrain_25k.zip), maintained by @[bryanbocao](https://github.com/bryanbocao)).
I need this project's pipeline, but I couldn't download all of the model weights in this repo. Could you upload them, please?
```
const fs = require('fs');

function readCSV(filePath) {
  const content = fs.readFileSync(filePath, 'utf-8');
  const lines = content.trim().split('\n');
  const headers = lines[0].split(',').map(h => h.trim());
  const rows = lines.slice(1).map(line => {
    const values = line.split(',').map(v => v.trim());
    // Map each value onto its header to build a row object.
    return headers.reduce((row, h, i) => ({ ...row, [h]: values[i] }), {});
  });
  return rows;
}
```