tumurzakov

Results 46 comments of tumurzakov

1. **After restart statuses connection become Ok** (Closing issue, thanx :)
2. `drbdadm status` after satellite restart:

```
pvc-1ed58414-0a3d-415d-8b20-80d4fae9cf71 role:Primary
  disk:UpToDate
  kube-node-1 role:Secondary
    peer-disk:UpToDate
  kube-node-6 role:Secondary
    peer-disk:UpToDate
```

`drbdadm status`...

```
root@kube-node-2# drbdadm --version
DRBDADM_BUILDTAG=GIT-hash:\ fa9b9d3823b6e1792919e711fcf6164cac629290\ build\ by\ buildd@lgw01-amd64-011\,\ 2020-11-05\ 11:51:01
DRBDADM_API_VERSION=2
DRBD_KERNEL_VERSION_CODE=0x090019
DRBD_KERNEL_VERSION=9.0.25
DRBDADM_VERSION_CODE=0x090f01
DRBDADM_VERSION=9.15.1
```

```
root@kube-node-6# drbdadm --version
DRBDADM_BUILDTAG=GIT-hash:\ fa9b9d3823b6e1792919e711fcf6164cac629290\ build\ by\ buildd@lgw01-amd64-011\,\ 2020-11-05\ 11:51:01
DRBDADM_API_VERSION=2
DRBD_KERNEL_VERSION_CODE=0x090019...
```

1. kube-node-7 down
2. `linstor n lost kube-node-7`
3. kube-node-7 up
4. `linstor node create kube-node-7 192.168.11.246`
5. `linstor sp create lvmthin kube-node-7 linstor-pool ubuntu-vg/linstor-pool`
6. After that command statuses...

I think the error occurred because I performed unexpected actions.

Good scenario:

1. Host is down
2. Wait, because nothing needs to happen: when the host is back up, replication continues
3. Host is...

No,

```
# kubectl -n kube-system get pods -o wide|grep coredns
coredns-758cc77499-crbmv 1/1 Running 0 22d 10.244.0.4 kube-master-3
coredns-758cc77499-z2flm 1/1 Running 0 40d 10.244.7.16 kube-master-2
```

I have 3 master...

From the name of the parameter `motion_bucket_id`, I can guess that they sort the videos in the dataset by motion magnitude, so every motion_bucket_id refers to some amount of motion. Somewhere in 1..300? there...
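As a rough sketch of that guess (the frame-difference motion metric, the score normalization, and the 1..300 bucket range are all my assumptions here, not confirmed by the model's code):

```python
def motion_score(frames):
    # Mean absolute difference between consecutive grayscale frames;
    # frames is a list of equally sized 2-D lists of floats in [0, 1].
    total, count = 0.0, 0
    for prev, cur in zip(frames, frames[1:]):
        for row_p, row_c in zip(prev, cur):
            for p, c in zip(row_p, row_c):
                total += abs(c - p)
                count += 1
    return total / count if count else 0.0

def motion_bucket_id(score, max_score=1.0, n_buckets=300):
    # Map a motion score in [0, max_score] onto bucket ids 1..n_buckets.
    score = min(max(score, 0.0), max_score)
    return max(1, min(n_buckets, round(score / max_score * n_buckets)))
```

Under this reading, a nearly static clip lands in a low bucket and a fast-moving clip in a high one, which is why passing a larger `motion_bucket_id` at inference asks for more motion.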

Hello. Everything depends on your dataset. Read how Open-Sora and Stability AI prepare their datasets:

https://github.com/hpcaitech/Open-Sora/blob/main/docs/report_01.md
https://github.com/hpcaitech/Open-Sora/blob/main/docs/report_02.md
https://github.com/hpcaitech/Open-Sora/blob/main/docs/report_03.md
https://arxiv.org/pdf/2311.15127

Carefully read the sections about dataset preparation; it is a multistep process. You...

You must go step by step. First, train one motion in a LoRA. I trained 96 frames at 512x288 resolution; it took nearly 24 GB. If you get quite good results,...

Now I am training a LoRA at 1024x576x3 and it takes 23.8 GB on my 3090.

1. Offload from GPU memory everything that is not needed for training (vae, text_encoder)
2. Precache samples (encode latents and...
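The precaching idea in point 2 can be sketched independently of any particular model. `encode` below is a stand-in for the real VAE or text-encoder call, and the helper names and on-disk format are illustrative, not from any library:

```python
import os
import pickle

def precache_latents(samples, encode, cache_dir):
    # Encode every training sample once and store the result on disk,
    # so the encoder (VAE, text encoder, ...) is not needed during training.
    paths = []
    for i, sample in enumerate(samples):
        path = os.path.join(cache_dir, f"latent_{i}.pkl")
        with open(path, "wb") as f:
            pickle.dump(encode(sample), f)
        paths.append(path)
    return paths

def load_latent(path):
    # The training loop reads cached latents instead of re-running the encoder.
    with open(path, "rb") as f:
        return pickle.load(f)
```

Once every sample is cached, the encoder itself can be moved off the GPU (with real models, something like `vae.to("cpu")`), which is where the point-1 memory savings come from.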

Yes, I'm using checkpointing too.