Persistent Checkpointing PR (#2184)
This is the first version of a PR that attempts to provide the functionality requested in issue #2184:

- `rr create-checkpoints -i <some_interval> [-s <some_start_event>, -e <some_end_event>]`, where the last two parameters are optional.
- `rr replay -g <evt>` spawns the session from the most recent PCP before `<evt>`. It also spawns from the most recent PCP if `-f <pid>` is used, i.e. it finds when `<pid>` is created and spawns the first PCP before that.
- `rr replay` uses PCPs during reverse execution.
- `rr rerun` uses PCPs as well.
Both the `replay` and `rerun` commands now take `--ignore-pcp` to ignore any PCPs, and I've made spawning from PCPs the default behavior of both commands.
Two commands have also been added to the spawned GDB: `write-checkpoints` and `load-checkpoints`.
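A rough usage sketch of the above (the interval and event numbers are arbitrary examples, and the `write-checkpoints`/`load-checkpoints` comments reflect my reading of what they do):

```sh
# Create persistent checkpoints roughly every 50000 events, limited to the
# (optional) event range 200..900000.
rr create-checkpoints -i 50000 -s 200 -e 900000

# Replay, spawning from the most recent PCP before event 123456 (the default),
# or ignore persistent checkpoints entirely.
rr replay -g 123456
rr replay -g 123456 --ignore-pcp

# rerun uses PCPs by default as well, and accepts the same opt-out.
rr rerun --ignore-pcp

# Inside the spawned GDB:
#   (gdb) write-checkpoints    # persist the checkpoints set in this session
#   (gdb) load-checkpoints     # restore previously persisted checkpoints
```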
The last point about persistent checkpoints being created at record time is not provided by this PR, but I'm willing to attempt to add that in a future PR, now that I have a little insight into how this would/could/maybe should work.
At this time, little to no optimization is performed. Each mapping in the process address space is serialized to disk and is currently not compressed in any way. Compressing the data that goes into anonymous mappings should be fairly simple to implement, since that data gets copied into memory when a PCP is restored, while file-backed mappings (like executable data, for instance) cannot be compressed as easily. We want to map as much as possible file-backed, because such mappings are not necessarily committed to physical memory immediately, unlike copying data into mappings.
Another optimization that could possibly be done: instead of creating each checkpoint "from scratch" during restore of PCPs, reconstitute the first one (at event N), then, when reconstituting the following checkpoint, fork the first and apply only the changes required. As it stands right now, a new session is created for each checkpoint, which theoretically consumes more memory. Forking checkpoint N+1 from N and changing the address space only where needed should, I think, mean that less memory is used.
Also, if anybody has any ideas on how one could possibly write tests for something like this, they would be most welcome to share those thoughts with me.
This has been languishing for a while due to its sheer size. To move this forward, I propose we just review the parts of it that integrate with the rest of rr to make sure it won't regress anything, and then take it. We can treat the checkpoint format as a work-in-progress that we can update at any time, and figure out the testing situation as we go. Does that sound reasonable @rocallahan?
Sure
This has been languishing for a while due to its sheer size. To move this forward, I propose we just review the parts of it that integrate with the rest of rr to make sure it won't regress anything, and then take it. We can treat the checkpoint format as a work-in-progress that we can update at any time, and figure out the testing situation as we go. Does that sound reasonable @rocallahan?
I will get to the minor changes you requested previously as soon as possible. I'm also open to any input about literally anything. Design is not my strongest suit, which will probably become apparent when looking at the code. And though "it seems to work" is not exactly a great metric (it's terrible, maybe?), I have used this feature quite a bit on my own and haven't noticed any issues so far.
Most of what I wrote was written in a way that avoids changing the original RR code (except for the very few parts that required it, like moving internal class declarations of structs). This probably means there is some code duplication and some "patch work"/hacks, if that makes any sense.
I'm still at a loss for how one would test this. The best idea I've come up with so far (though I haven't implemented it) is to write Python scripts that drive GDB and do things like set a checkpoint, serialize, kill the process, deserialize, and verify the process space. That would be quite a "chunky" end-to-end test, but it's the only idea I can think of. And it's not just tracee-land that needs to be in a good state; supervisor-land also needs to be in an identical state. Off the top of my head, some of the things to consider are:
Tracee
- Memory mappings are all in place
- Memory contents are identical (after deserialization)
Supervisor (the "rr process")
- Memory mapping metadata is identical
- Frame time metadata is identical, like ticks, event count, etc.
- Last signal seen
- Task start events
- VM start events
- File table stuff, and many more things
Some of these things are helped out by the fact that RR asserts if it finds itself in an unexpected state.
If you have time, this would be a good time to rebase this PR as we just shipped an rr release and don't have anything substantial planned for a while.
If you have time, this would be a good time to rebase this PR as we just shipped an rr release and don't have anything substantial planned for a while.
I'm starting to work on that now. I'm not exactly superb at git, but I've got some help with it, so hopefully within the week I'll have this PR building and working, rebased on the new release.
Friendly ping @theIDinside - faster replay will definitely be useful :-)
Thanks so much for continuing to work on this. @rocallahan and I took a look at this this weekend. Rather than do a bunch of nitpicking at this point we have a few high-level questions/comments:
- This supports checkpointing at arbitrary marks. How much simpler would things be if we only supported checkpointing at event boundaries? We suspect what you've done is the right thing but we're curious how much you think things could be simplified if we only allowed checkpoints at events.
- Do you anticipate any barriers to making the checkpoint format stable at some point in the future? Obviously we'd want it to be tested much more before we committed to a checkpoint format, but at some point stability of the checkpoint format would be valuable to other tools (e.g. Pernosco).
- Why is the on-disk format structured so that there's a global index of metadata and separate files for some but not all checkpoint data as opposed to having a single self-contained file for each checkpoint and no global index?
- Can we get at least a basic test that the checkpoint feature works? We're not expecting exhaustive tests at this stage but more than zero would be nice?
- The current commit structure seems to be of little value. Are we wrong? If not, perhaps organize the PR as follows: FileMonitor changes separately in one commit, UI commands including both rr and gdb commands separately in commits as necessary, separate tests, otherwise squash all the commits.
Our initial read through the patch yielded a lot of relatively minor comments. The general approach seems sound, modulo perhaps the checkpoint data format question above.
- This supports checkpointing at arbitrary marks. How much simpler would things be if we only supported checkpointing at event boundaries? We suspect what you've done is the right thing but we're curious how much you think things could be simplified if we only allowed checkpoints at events.
I think I need more clarification for this question. When `rr create-checkpoints` executes, we basically just move forward in time and, as soon as we've crossed some interval and reached an event boundary where we can set a checkpoint, we do so. Or do you mean removing the ability to make "gdb checkpoints" persistent, i.e. not keeping around the mark-with-clone and the mark-without-clone, which we then seek to at restore? This would indeed simplify things and it would also make any future format cleaner.
- Do you anticipate any barriers to making the checkpoint format stable at some point in the future? Obviously we'd want it to be tested much more before we committed to a checkpoint format, but at some point stability of the checkpoint format would be valuable to other tools (e.g. Pernosco).
I think the format can be stabilized fairly well. But even if it couldn't, could this not be solved via some indirection in the format, or is that impossible? For instance, with the current PR as a reference, it would mean that `CheckpointInfo` could be the format that's exposed to external tools, whereas the internal representation of the serialized tracee data is described by `CloneCompletionInfo`; in a way, a secondary format for the actual tracee process space data that's consumed by the "persistent checkpoint" format, if that makes any sense. I'm not sure if that's doable or if that would be considered a stable checkpoint format.
In the future we probably (maybe) don't want to dump the entire process to disk every time we serialize. We would need some way to represent this incremental state, something that is probably not as interesting to external tools, and I'm guessing it would break the format too.
- Why is the on-disk format structured so that there's a global index of metadata and separate files for some but not all checkpoint data as opposed to having a single self-contained file for each checkpoint and no global index?
I have no good explanation here :stuck_out_tongue:. Ultimately, the format needs to describe two things:
- supervisor state (statistics, ticks, whether we're using syscall buffering, etc.)
- actual tracee data & metadata (the memory mappings and their contents)
I'll simplify this so that it doesn't get spread out across different files. If I recall correctly, it had something to do with being able to remove individual checkpoints more easily. Or maybe I modelled it such that it would be simpler for me to manage checkpoints from the perspective of GDB; I can't really remember.
- Can we get at least a basic test that the checkpoint feature works? We're not expecting exhaustive tests at this stage but more than zero would be nice?
This is a tricky one and one I've thought about a lot. I keep coming back to having GDB be driven by a Python script, which does the following:
1. Decide a range of events where it wants to "test the world", say events N .. M
2. Connect to a running RR instance
3. Set a checkpoint at N
4. "Record the world" (memory mappings, their contents, everything)
5. Continue to M
6. Delete the checkpoint
7. Load the checkpoint from disk, repeat steps 3-5
8. Compare the outputs from step 4
Since RR is deterministic in its replays, comparing the output should be "trivial", or am I being too heavy-handed here? Would something like that be considered acceptable?
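Roughly, a simplified shell sketch of that comparison (the trace name, interval, event number, and the use of a gdb command file via rr replay's `-x` option are illustrative assumptions, not things this PR adds):

```sh
#!/bin/bash
# Sketch only: compare "the world" at event N when replaying normally against
# the world when spawning from a persistent checkpoint. Names are made up.
TRACE=mytrace
N=100000   # some event after at least one persistent checkpoint exists

# gdb commands that "record the world" at event N and then exit.
cat > record_world.gdb <<'EOF'
set pagination off
set confirm off
info proc mappings
info registers
quit
EOF

rr create-checkpoints -i 50000 "$TRACE"

# Run 1: replay to event N without using persistent checkpoints.
rr replay --ignore-pcp -g "$N" -x record_world.gdb "$TRACE" > world_plain.txt 2>&1

# Run 2: replay to event N, spawning from the most recent PCP (the default).
rr replay -g "$N" -x record_world.gdb "$TRACE" > world_pcp.txt 2>&1

# Replay is deterministic, so the recorded state should be identical.
diff world_plain.txt world_pcp.txt
```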
- The current commit structure seems to be of little value. Are we wrong? If not, perhaps organize the PR as follows: FileMonitor changes separately in one commit, UI commands including both rr and gdb commands separately in commits as necessary, separate tests, otherwise squash all the commits.
I'll re-arrange it as suggested.
I think I need more clarification for this question. When `rr create-checkpoints` executes, we basically just move forward in time and, as soon as we've crossed some interval and reached an event boundary where we can set a checkpoint, we do so. Or do you mean removing the ability to make "gdb checkpoints" persistent, i.e. not keeping around the mark-with-clone and the mark-without-clone, which we then seek to at restore? This would indeed simplify things and it would also make any future format cleaner.
The latter.
TBH I think it's probably worth having the ability to create checkpoints at arbitrary moments in time, since the amount of work executed between events can be arbitrarily large. But there is a tradeoff here which we wanted to think about.
For instance, with the current PR as a reference, it would mean that `CheckpointInfo` could be the format that's exposed to external tools, whereas the internal representation of the serialized tracee data is described by `CloneCompletionInfo`; in a way, a secondary format for the actual tracee process space data that's consumed by the "persistent checkpoint" format, if that makes any sense. I'm not sure if that's doable or if that would be considered a stable checkpoint format.
I'm not sure what you mean here. "Stable checkpoint format" would mean that persistent checkpoints created by rr version X can be restored by any rr version >= X.
That would be good, although I suppose it's not as important as just being able to replay the trace, since you can always restore a checkpoint very slowly by just replaying to the right point.
I'll simplify this so that it doesn't get spread out across different files. If I recall correctly, it had something to do with being able to remove individual checkpoints more easily. Or maybe I modelled it such that it would be simpler for me to manage checkpoints from the perspective of GDB; I can't really remember.
To be clear, one file per checkpoint seems like a good idea because then it's easy to add and remove checkpoints efficiently. The question is whether that index file is a good idea. One issue with it: what happens if someone tries to concurrently create checkpoints, or adds a checkpoint at the same time as replaying?
Since RR is deterministic in its replays, comparing the output should be "trivial", or am I being too heavy-handed here? Would something like that be considered acceptable?
I think for now all we really need is a debug script that starts a program, runs it to some point, creates a checkpoint, and exits, plus another debug script that restores the checkpoint and tests that some simple state is as it should be. Doesn't need to be very comprehensive or general. Over time we'll have to add more tests like this that test different parts of the state and we might want some utility functions in util.sh to help with that, but I don't think we need that just yet.
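For what it's worth, a rough sketch of the shape such a pair of scripts could take, written here as plain gdb command files rather than against util.sh/util.py (the program name, the checkpoint id passed to `restart`, and the exact behavior of `write-checkpoints`/`load-checkpoints` are assumptions):

```sh
#!/bin/bash
# Sketch only; not wired into rr's test harness.

# Script 1: run to main, set a checkpoint, persist it, and exit.
cat > create_cp.gdb <<'EOF'
set confirm off
break main
continue
checkpoint
write-checkpoints
quit
EOF

# Script 2: load the persisted checkpoints, restart at one, and check some
# simple state (here just the program counter and the mappings).
cat > check_cp.gdb <<'EOF'
set confirm off
load-checkpoints
restart 1
print $pc
info proc mappings
quit
EOF

rr record ./simple_test_program   # hypothetical test program
rr replay -x create_cp.gdb
rr replay -x check_cp.gdb
```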