
No space left on device creates hung EC2 instance

Open evamaxfield opened this issue 3 years ago • 16 comments

Hello!

First I just want to say thank you for this library, it is truly incredible what I have been able to spin up in such a short timeframe. :bow: :bow:

Onto the error: I was attempting to train a model with a bit more data than my original run and hit a System.IO.IOException: No space left on device. I should have expected this but did not. In my prior test runs, the EC2 instance was correctly shut down after either error or success, but this time it was not: the associated EC2 instance stayed running until I manually went and terminated it.

My personal desire would be to have it terminate on any error, including system errors, but this one may be tricky to handle, so I understand; it may just be that some documentation should be added listing what to clean up manually.

Full log here: https://github.com/JacksonMaxfield/phd-infrastructures/actions/runs/2321863887

Thank you again!

evamaxfield avatar May 13 '22 21:05 evamaxfield

Taking a look. 👀


dacbd avatar May 13 '22 22:05 dacbd

@JacksonMaxfield I have anecdotally noticed that AWS instances like to behave unexpectedly when the Actions runner runs out of memory, as happened in your linked example.

Can you provide the output of the following in a gist or Pastebin from the crashed instance?

  • journalctl -n all -u cml.service --no-pager
  • sudo dmesg --ctime
  • sudo dmesg --ctime --userspace

dacbd avatar May 13 '22 22:05 dacbd

I unfortunately cannot. I already terminated the instance, its attached volumes, etc.

evamaxfield avatar May 13 '22 22:05 evamaxfield

My personal desire would be to have it terminate on any error, including system errors, but this one may be tricky to handle, so I understand; it may just be that some documentation should be added listing what to clean up manually.

That is the intention. I believe there may be a niceness issue that we can probably fix; I will try to investigate.

I suspect the OOM crash is also preventing our cleanup process from executing, which is why the instance remains.

dacbd avatar May 13 '22 22:05 dacbd

I unfortunately cannot. I already terminated the instance, its attached volumes, etc.

No worries; if it happens again, those commands help us diagnose the issue. Consider including --cloud-startup-script=$(echo 'echo "$(curl https://github.com/'"$GITHUB_ACTOR"'.keys)" >> /home/ubuntu/.ssh/authorized_keys' | base64 -w 0) for easy SSH access to the instance for debugging.
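Decoded, that one-liner is just a small shell script the instance runs at boot. A sketch of the round trip (the actor name below is a placeholder; the path assumes the default ubuntu user, as above):

```shell
# Build the startup script: fetch the actor's public GitHub keys and
# append them to the default user's authorized_keys at boot.
GITHUB_ACTOR="evamaxfield"  # placeholder; set by Actions in a real run
SCRIPT='echo "$(curl https://github.com/'"$GITHUB_ACTOR"'.keys)" >> /home/ubuntu/.ssh/authorized_keys'

# cml expects the script base64-encoded on the command line:
ENCODED=$(echo "$SCRIPT" | base64 -w 0)

# Decoding locally shows the exact command the instance will execute:
echo "$ENCODED" | base64 -d
```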

dacbd avatar May 13 '22 22:05 dacbd

@JacksonMaxfield, you may also want to use cml runner --cloud-hdd-size=<number>, where <number> is a custom storage size in gigabytes.

0x2b3bfa0 avatar May 13 '22 22:05 0x2b3bfa0

Yep! I am already using that. I just bumped it up.
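Besides bumping --cloud-hdd-size, a job step can also fail fast before training instead of dying mid-run with ENOSPC. A minimal sketch (the 1 GB threshold and the current-directory mount are arbitrary choices for illustration):

```shell
# Abort early if the working volume is nearly full, rather than
# hitting "No space left on device" mid-training.
THRESHOLD_KB=$((1 * 1024 * 1024))                        # 1 GB in 1 KB blocks
FREE_KB=$(df -k --output=avail . | tail -1 | tr -d ' ')  # free space here
if [ "$FREE_KB" -lt "$THRESHOLD_KB" ]; then
    echo "only ${FREE_KB} KB free; aborting before training" >&2
    exit 1
fi
echo "disk check passed: ${FREE_KB} KB free"
```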

evamaxfield avatar May 13 '22 22:05 evamaxfield

@JacksonMaxfield it's looking pretty successful! If you wanted, you could say run it on a smaller instance and yank those logs? ❤️ But no worries otherwise; I'm pretty certain about why it failed to self-terminate.

dacbd avatar May 14 '22 00:05 dacbd

I can do that; I may just need a bit of step-by-step instructions.

If I am understanding correctly, you want me to add the option:

--cloud-startup-script=$(echo 'echo "$(curl https://github.com/'"$GITHUB_ACTOR"'.keys)" >> /home/ubuntu/.ssh/authorized_keys' | base64 -w 0)

here

But then what do I do after that? Where does that key go?

When and where should I run these commands?

  • journalctl -n all -u cml.service --no-pager
  • sudo dmesg --ctime
  • sudo dmesg --ctime --userspace

Apologies for my naivety

evamaxfield avatar May 14 '22 00:05 evamaxfield

I am heading out for the weekend, can take a look next week!

evamaxfield avatar May 14 '22 00:05 evamaxfield

Correct, that will add your SSH keys to the default ubuntu user so you can connect to the instance with ssh. After the action fails from the server running out of memory, you can run:

ssh ubuntu@instance_ip
sudo journalctl -n all -u cml.service --no-pager > cml.log
sudo dmesg --ctime > system.log
sudo dmesg --ctime --userspace > userspace.log

then from your computer you can copy them:

scp ubuntu@instance_ip:~/cml.log .
scp ubuntu@instance_ip:~/system.log .
scp ubuntu@instance_ip:~/userspace.log .

There is a chance that the server could be really broken; if the ssh command hangs, reboot the instance from the web console and try the commands again.

dacbd avatar May 14 '22 03:05 dacbd

Thanks for the step-by-step, @dacbd. I'm running a new training job with a storage size that should fail: https://github.com/evamaxfield/phd-infrastructures/actions/runs/2333920369

Side note: maybe I just didn't see it in the documentation, but it might be good to have a spot listing all the things to clean up manually if the instance never terminates. From last time, I think the things I had to terminate/delete were:

  • The EC2 instance
    • I think the attached volume deleted itself when I terminated the instance, but if not, the volume should be deleted too
  • The created cml-iterative security group
  • The created key-pair?

I can double check this list after I cleanup the resources from the planned failed instance that is currently setting itself up :joy:

evamaxfield avatar May 16 '22 17:05 evamaxfield

Sure, we always welcome contributions. If you felt some of the documentation was hard to understand as a new user, please do let us know; feedback is always welcome.

For cml runner there are some limitations on what it can clean up after itself; IIRC the security group is one of them (it provides the VPC assignment). You can work around this by providing a premade one with --aws-security-group.
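For reference, a sketch of the manual cleanup steps from the list above as AWS CLI calls, wrapped in a function so it is only run deliberately. The instance ID and key-pair name are placeholders, this assumes a configured AWS CLI, and you should double-check each resource in the console before deleting:

```shell
# Hypothetical manual cleanup of resources a crashed runner leaves behind.
cml_manual_cleanup() {
    instance_id="$1"   # e.g. i-0123456789abcdef0 (placeholder)
    key_name="$2"      # name of the generated key pair (placeholder)

    # 1. Terminate the stuck instance; volumes flagged
    #    delete-on-termination are removed with it.
    aws ec2 terminate-instances --instance-ids "$instance_id"

    # 2. Once the instance is gone, delete the auto-created group.
    aws ec2 delete-security-group --group-name cml-iterative

    # 3. Remove the generated key pair, if one was created.
    aws ec2 delete-key-pair --key-name "$key_name"
}
```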

dacbd avatar May 16 '22 18:05 dacbd

Finally got it! Here you go!

cml.log system.log userspace.log

evamaxfield avatar May 16 '22 20:05 evamaxfield

Thanks, just the cml log was enough in this case.

dacbd avatar May 17 '22 04:05 dacbd

https://github.com/community/community/discussions/30440

dacbd avatar Aug 22 '22 18:08 dacbd

https://github.com/iterative/cml/pull/1225

dacbd avatar Oct 17 '22 15:10 dacbd