Message-ID: <ef6f8554-8324-a4d8-4549-759495e482b7@redhat.com>
Date:   Fri, 13 Sep 2019 23:44:20 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Dexuan Cui <decui@...rosoft.com>,
        KY Srinivasan <kys@...rosoft.com>,
        Haiyang Zhang <haiyangz@...rosoft.com>,
        Stephen Hemminger <sthemmin@...rosoft.com>,
        "sashal@...nel.org" <sashal@...nel.org>,
        "linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Michael Kelley <mikelley@...rosoft.com>
Subject: Re: [PATCH] hv_balloon: Add the support of hibernation

On 13.09.19 22:54, Dexuan Cui wrote:
>> From: David Hildenbrand <david@...hat.com>
>> Sent: Friday, September 13, 2019 12:46 AM
>>
>> On 12.09.19 21:18, Dexuan Cui wrote:
>>> 3. Hibernation can be especially useful when we pass through a PCIe device,
>>> e.g. a NIC, an NVMe controller or a GPU, to the VM, as save/restore and
>>> live migration usually cannot work with this kind of configuration, because
>>> the host typically doesn't know how to save/restore the state of the PCIe
>>> device.
>>
>> Interesting. Under QEMU/KVM (especially for migration), the discussed
>> solutions I am aware of rather wanted to temporarily unplug the PCI
>> devices or replace them with some kind of "standby" device temporarily.
> 
> For complex devices like a modern GPU, there may not be an equivalent
> "standby" software-emulated device, and unplugging the PCI device
> temporarily is not good, as it may not be transparent to the userspace
> applications. Hibernation here is especially useful, e.g. to Virtual
> Desktop Infrastructure users whose VMs can own physical GPUs, because
> all the userspace applications are frozen when the VM is hibernated, and
> when the VM resumes, the applications are automatically resumed and
> continue to run seamlessly, at least in theory. A hibernated VM also saves
> compute resources and cost for the users.

Yes, I can see how GPUs might be problematic, especially for desktop
infrastructures (and maybe especially when running specific guest
operating systems :) ). Thanks for the explanation.

[...]

> On recent Windows Server 2019+ hosts, the toolstack on the host
> guarantees that Dynamic Memory and Memory Resizing cannot be enabled
> if the virtual ACPI S4 state is enabled, and vice versa. Please refer to the
> long write-up I made here: https://lkml.org/lkml/2019/9/5/1160 .

Hah, so the patch here is not actually relevant for modern Hyper-V
installations. (I would have loved to read that in the patch description
- but maybe I missed that)

> 
> And, to make the hibernation functionality automated, the host is able to
> send a "please hibernate" message to the VM via the Hyper-V shutdown
> device upon the user's request (e.g. via GUI or scripting): see
> https://lkml.org/lkml/2019/9/13/811 . Before the host sends the message,
> it checks if the virtual ACPI S4 state is enabled for the VM: if not, the host
> refuses to send the message. So a user who wants to use the hibernation
> feature must make sure the virtual ACPI S4 state is enabled for the VM,
> which in turn means Dynamic Memory and Memory Resizing cannot be
> active, due to the restrictions from the host toolstack.

Okay, *but* this is a current limitation. Just saying. If you could at
least support balloon inflate/deflate, that would be a clear win for
users. And fewer configuration knobs.

> 
> And the hibernation functionality won't be officially supported on old
> Windows Server hosts.
> 
> So, IMHO we can't be bothered to implement the idea you described in
> detail. Sorry. :-)

No worries, I neither develop for, use, nor work with Hyper-V. I was just
reading along and wondering why you basically make the hv_balloon
unusable in these environments. (initially I thought, "why don't you
just disallow probing the device completely")

I am aware of the (hypervisor) issues of hibernation/suspend when it
comes to balloon drivers / memory hot(un)plug. (I am currently working
on virtio-mem myself and initially decided to block any
hibernation/suspend attempts while the driver is loaded and memory
was plugged/unplugged)
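
The blocking itself can be as simple as vetoing the transition from a
PM notifier. A rough sketch, with the driver state flag made up for
illustration:

#include <linux/notifier.h>
#include <linux/suspend.h>

/* Hypothetical driver state: have we ever plugged/unplugged memory? */
static bool memory_was_plugged;

static int mem_pm_notify(struct notifier_block *nb,
                         unsigned long mode, void *unused)
{
        if ((mode == PM_HIBERNATION_PREPARE ||
             mode == PM_SUSPEND_PREPARE) && memory_was_plugged)
                return NOTIFY_BAD;      /* veto: abort the transition */
        return NOTIFY_OK;
}

static struct notifier_block mem_pm_nb = {
        .notifier_call = mem_pm_notify,
};

/* in probe: register_pm_notifier(&mem_pm_nb); */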

> 
> And, while I agree your idea is good, technically speaking I suspect it may
> not be really useful, because once hv_balloon allows balloon-up/down,
> hv_balloon effectively loses control of memory pages: after the host
> takes some memory away, the VM never knows when exactly the
> host will give it back -- actually the host never guarantees how soon
> it will give the memory back. Consequently, the VM almost immediately
> ends up in an un-hibernatable state...

If you go via the host, you might be able to make sure the balloon is
deflated before you try to hibernate, and inflated again when back up.
You might even ask the user for permission. Of course, once you have
deflated the balloon, there is no guarantee it can be inflated back to
its original size. But after all, it's "dynamic memory", so that might
even be what the name suggests. It could very well be controlled from
the host.
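
To make that ordering concrete from the guest's side, something along
these lines (none of these message types or helpers exist in hv_balloon;
they are invented purely to show the sequence):

/* Hypothetical host messages; hv_balloon has no such protocol today. */
enum host_msg {
        HOST_MSG_DEFLATE_ALL,   /* "give all ballooned memory back" */
        HOST_MSG_HIBERNATE,     /* sent only once the balloon is empty */
};

/* Hypothetical helpers, again invented for illustration. */
static void release_all_ballooned_pages(void);
static void report_balloon_size(unsigned long pages);
static void request_guest_hibernation(void);

static void handle_host_msg(enum host_msg msg)
{
        switch (msg) {
        case HOST_MSG_DEFLATE_ALL:
                /* Host wants to hibernate the VM: empty the balloon first. */
                release_all_ballooned_pages();
                report_balloon_size(0);
                break;
        case HOST_MSG_HIBERNATE:
                /* Balloon is empty, so the saved image is self-contained. */
                request_guest_hibernation();
                break;
        }
}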

If you go via the guest, you would first have to tell your hypervisor
"please allow me to deflate so I can hibernate", or something like that.
After hibernation (or some time X), the host might then decide to
inflate again.
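
In notifier terms, the guest-driven variant could look roughly like
this (the handshake helpers are once more invented for illustration):

#include <linux/notifier.h>
#include <linux/suspend.h>

/* Hypothetical handshake helpers; no such hv_balloon API exists. */
static int ask_host_permission_to_deflate(void);
static void deflate_balloon_fully(void);

static int balloon_pm_notify(struct notifier_block *nb,
                             unsigned long mode, void *unused)
{
        switch (mode) {
        case PM_HIBERNATION_PREPARE:
                /* "please allow me to deflate so I can hibernate" */
                if (ask_host_permission_to_deflate())
                        return NOTIFY_BAD;      /* host said no: veto S4 */
                deflate_balloon_fully();
                break;
        case PM_POST_HIBERNATION:
                /* Nothing to do: the host decides when to inflate again. */
                break;
        }
        return NOTIFY_OK;
}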

E.g., take a look at virtio-balloon. When suspending, it simply deflates
(without asking ...) and inflates again when resuming. Not saying that's
the best approach (it's not :) ), but it's one approach that at least
makes it work.
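
Simplified, that behavior is just a freeze/restore pair, roughly like
this (abridged from drivers/virtio/virtio_balloon.c; remove_common()
leaks every ballooned page back to the guest):

static int virtballoon_freeze(struct virtio_device *vdev)
{
        struct virtio_balloon *vb = vdev->priv;

        /* Deflate completely before the memory image is saved. */
        remove_common(vb);
        return 0;
}

static int virtballoon_restore(struct virtio_device *vdev)
{
        struct virtio_balloon *vb = vdev->priv;
        int ret;

        ret = init_vqs(vb);
        if (ret)
                return ret;

        virtio_device_ready(vdev);

        /* Re-inflate towards whatever size the host currently wants. */
        if (towards_target(vb))
                virtballoon_changed(vdev);
        update_balloon_size(vb);
        return 0;
}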

Anyhow, just some comments from my side :) I can see how Windows Server
works around that issue right now by just XOR'ing both features.

> 
> Thanks,
> -- Dexuan
> 


-- 

Thanks,

David / dhildenb
