Failure on Windows 10 v2004 and v2009 under Hyper-V with dynamic memory
TLDR: The RC2 build of WinPmem v4 fails to run reliably on Windows 10 v2004 and v2009 (20H2) inside a Hyper-V guest with the "dynamic memory" option enabled. This option is unfortunately a Hyper-V default, but it can be disabled. The failure occurs once the guest's memory has been expanded at some point since boot due to memory pressure within the guest; before such an expansion, the memory dump works as expected.
Conditions tested:
- Execution under Windows 10 v2004 and v2009.
- Operating system under Hyper-V with the "dynamic memory" option enabled.
- Operating system under Hyper-V with the "dynamic memory" option disabled.
- Operating system under VMware v16.1.
- RAM allocations to virtual machines: 2GB, 4GB.
- Processor allocations to virtual machines: 2, 4.
- Memory pressure applied by running multiple instances of Visual Studio to consume memory prior to the test.
- Target output modes tested: both stdout and file.
- All methods of memory acquisition (-0, -1, -2).
Conditions NOT tested:
- Execution under any other Windows OS version.
Failure condition observed under:
- Windows 10 v2004 and v2009.
- Execution under Hyper-V with "dynamic memory" enabled.
- Failure is independent of the number of cores and the RAM allocation.
- Failure does NOT occur on VMware guests.
- Failure does not occur when "dynamic memory" is disabled.
Symptoms:
- Immediate error code received: -1 or -8.
- Failure message: "Failed to get memory geometry: The program issued a command but the command length is incorrect"
Notes:
- Failure does not occur unless memory pressure has been applied to the guest at some point since it started.
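For what it's worth, that failure message is the standard Win32 text for error 24 (`ERROR_BAD_LENGTH`), which is what a driver returning `STATUS_INFO_LENGTH_MISMATCH` surfaces as in user space. A quick way to confirm the mapping:

```c
// Print the Win32 message text for error 24 (ERROR_BAD_LENGTH).
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char msg[512];
    if (FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                       NULL, ERROR_BAD_LENGTH, 0, msg, sizeof(msg), NULL)) {
        // Prints: "Error 24: The program issued a command but the
        // command length is incorrect."
        printf("Error %lu: %s", (unsigned long)ERROR_BAD_LENGTH, msg);
    }
    return 0;
}
```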
Do we see any logs from the kernel? Can you please install DebugView and capture kernel messages to help debug?
Maybe in this configuration the guest has very many physical memory ranges and they don't fit in the IOCTL we send to the kernel?
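To sketch that hypothesis (the IOCTL code, device path, and struct layout below are illustrative placeholders, not WinPmem's actual definitions): if the output struct has a fixed-size run array and the driver rejects calls it cannot satisfy with `STATUS_INFO_LENGTH_MISMATCH`, user space would see exactly error 24.

```c
// Hypothetical sketch only: the IOCTL code, device path, and struct
// layout are placeholders, not WinPmem's real definitions.
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

#define IOCTL_GET_MEMORY_INFO \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)
#define MAX_RUNS 100  /* fixed at compile time -- the suspected problem */

typedef struct {
    ULONG64 NumberOfRuns;
    struct { ULONG64 Start; ULONG64 Length; } Runs[MAX_RUNS];
} MEMORY_INFO;

int main(void)
{
    HANDLE dev = CreateFileA("\\\\.\\pmem", GENERIC_READ | GENERIC_WRITE,
                             0, NULL, OPEN_EXISTING, 0, NULL);
    if (dev == INVALID_HANDLE_VALUE) {
        printf("open failed: %lu\n", GetLastError());
        return 1;
    }

    MEMORY_INFO info;
    DWORD got = 0;
    if (!DeviceIoControl(dev, IOCTL_GET_MEMORY_INFO, NULL, 0,
                         &info, sizeof(info), &got, NULL)) {
        /* With more than MAX_RUNS fragments, a driver answering with
           STATUS_INFO_LENGTH_MISMATCH shows up here as error 24. */
        printf("DeviceIoControl failed: %lu\n", GetLastError());
    }
    CloseHandle(dev);
    return 0;
}
```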
Thanks for the reply. I will allocate some time to follow up on the information request and report back.
As requested, the Hyper-V configuration: 2 cores, 2 GB memory (dynamic memory option enabled).
For reference, a similar output is attached for the case where "dynamic memory" is not used. There is no failure in that situation; everything works as expected.
Guys, I wrote about that. https://github.com/Velocidex/WinPmem/issues/10
Please uncheck "Dynamic Memory" by setting your memory to a static size (and please tell us whether that removes the problem). It's most likely not a WinPmem issue: after a while, the memory ranges become heavily fragmented. You can check that with the Sysinternals RAMMap tool from Microsoft. MS just never seems to fix it.
(If you, by chance, get a nice view of these fragmented memory ranges in the RAMMap tool, with ~1000-10000 memory range fragments, consider dropping me a screenshot. I would use it for a report on the Windows Driver Docs issue tracker, to increase the chances of an official bug fix.)
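If RAMMap isn't handy, the range count can also be read from the registry: the `.Translated` value of `HKLM\HARDWARE\RESOURCEMAP\System Resources\Physical Memory` is a `CM_RESOURCE_LIST` describing the physical memory runs. A rough sketch, assuming (as is typical for this key) a single full-resource descriptor in the list:

```c
// Rough sketch: count the physical memory ranges Windows reports.
// Assumes the ".Translated" value is a CM_RESOURCE_LIST containing a
// single CM_FULL_RESOURCE_DESCRIPTOR, as is typical for this key.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    static BYTE buf[1 << 20];  /* generous: heavy fragmentation means a big list */
    HKEY key;
    DWORD size = sizeof(buf), type = 0;

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "HARDWARE\\RESOURCEMAP\\System Resources\\Physical Memory",
                      0, KEY_READ, &key) != ERROR_SUCCESS)
        return 1;

    if (RegQueryValueExA(key, ".Translated", NULL, &type, buf, &size) == ERROR_SUCCESS) {
        CM_RESOURCE_LIST *list = (CM_RESOURCE_LIST *)buf;
        if (list->Count >= 1)
            printf("Physical memory ranges: %lu\n",
                   (unsigned long)list->List[0].PartialResourceList.Count);
    }
    RegCloseKey(key);
    return 0;
}
```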
Can we just increase the size of the buffer we send from user space to accommodate all the ranges?
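Something like the standard grow-and-retry pattern might work for the user-space half, though the driver would also have to be changed to report the required size instead of failing outright; a sketch under that assumption:

```c
// Sketch: grow the output buffer until the variable-length range list
// fits. Assumes a driver that fails with ERROR_INSUFFICIENT_BUFFER or
// ERROR_MORE_DATA when the buffer is too small -- the current driver
// would need a matching change, since today it just fails hard.
#include <windows.h>
#include <stdlib.h>

BYTE *query_ranges(HANDLE dev, DWORD ioctl, DWORD *out_size)
{
    DWORD size = 4096;
    BYTE *buf = NULL;

    for (;;) {
        BYTE *p = (BYTE *)realloc(buf, size);
        if (!p) { free(buf); return NULL; }
        buf = p;

        DWORD got = 0;
        if (DeviceIoControl(dev, ioctl, NULL, 0, buf, size, &got, NULL)) {
            *out_size = got;
            return buf;  /* caller frees */
        }
        DWORD err = GetLastError();
        if (err != ERROR_INSUFFICIENT_BUFFER && err != ERROR_MORE_DATA) {
            free(buf);   /* a real failure, not a size problem */
            return NULL;
        }
        size *= 2;       /* even 8000 ranges at 16 bytes is only ~128 KB */
    }
}
```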
@scudette investigating such a solution would be appreciated from our side, since we have to run WinPmem on some machines where changing the Hyper-V dynamic memory option is not available to us.
That was one idea I had in the past, but it's simply too much: I once saw 8000 memory ranges!
Here is one: [screenshot attachment showing the fragmented memory ranges]
If it were only 100 or even 500 ranges, okay. But how far can this go? It could be over 10,000 memory ranges to transfer. I mean, with fast I/O, sure, it's possible. But right now we have static-sized device I/O packets, which is also why it fails (at least it fails in a controlled way...).
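For scale (my arithmetic, assuming each range is an 8-byte start plus an 8-byte length): 10,000 ranges × 16 bytes = 160,000 bytes, roughly 160 KB, so the transfer itself would be cheap even with buffered I/O; the obstacle is purely the fixed packet layout.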
OK, I'm reporting it right now (yes, right now); it would really be good if it were fixed.
Hi guys, I've tried multiple approaches by trial & error, and I've now been given an email address for reporting the kernel bug. :-) ~~There's hope for this to be fixed.~~
I'm sorry to say, but they don't seem eager to fix this particular bug. Please disable dynamic memory! There is no way to contain this bug; the fragmentation can potentially grow without limit.