Message-ID: <30d66f75-60c8-4ebf-8451-839806400dd4@redhat.com>
Date: Fri, 26 Apr 2024 10:58:56 +0200
From: David Hildenbrand <david@...hat.com>
To: mhklinux@...look.com, haiyangz@...rosoft.com, wei.liu@...nel.org,
 decui@...rosoft.com, linux-kernel@...r.kernel.org,
 linux-hyperv@...r.kernel.org
Subject: Re: [PATCH 1/1] hv_balloon: Enable hot-add for memblock sizes > 128
 Mbytes

On 11.03.24 19:12, mhkelley58@...il.com wrote:
> From: Michael Kelley <mhklinux@...look.com>
> 
> The Hyper-V balloon driver supports hot-add of memory in addition
> to ballooning. The current code hot-adds in fixed-size chunks of
> 128 Mbytes (the constant HA_CHUNK in the code). While this works
> in Hyper-V VMs with 64 Gbytes or less of memory, where the Linux
> memblock size is 128 Mbytes, the hot-add fails for larger memblock
> sizes because add_memory() expects memory to be added in chunks
> that match the memblock size. Messages like the following are
> reported when Linux has a 256 Mbyte memblock size:
> 
> [  312.668859] Block size [0x10000000] unaligned hotplug range:
>                 start 0x310000000, size 0x8000000
> [  312.668880] hv_balloon: hot_add memory failed error is -22
> [  312.668984] hv_balloon: Memory hot add failed
> 
> Larger memblock sizes are usually used in VMs with more than
> 64 Gbytes of memory, depending on the alignment of the VM's
> physical address space.

Right, that's the case since 2018.
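
For anyone hitting this: the failure comes from the generic hotplug
range check in mm, which rejects any range not aligned to the memory
block size. Roughly (a sketch, not a quote of the actual mm code --
the names here are illustrative):

  /*
   * The constraint add_memory() enforces: both the start address and
   * the size of a hot-added range must be memory-block aligned.
   */
  static bool range_is_hot_addable(u64 start, u64 size)
  {
          unsigned long block = memory_block_size_bytes();

          return IS_ALIGNED(start, block) && IS_ALIGNED(size, block);
  }

The -22 in the log above is the -EINVAL from that check.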


> 
> Fix this problem by having the Hyper-V balloon driver determine
> the Linux memblock size, and process hot-add requests in that
> chunk size instead of a fixed 128 Mbytes. Also update the hot-add
> alignment requested of the Hyper-V host to match the memblock
> size instead of being a fixed 128 Mbytes.

That way, we should never be getting unaligned ranges IIRC, correct? I 
think we added ways in QEMU to guarantee that for the HV-balloon 
implementation as well.
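
On the chunk-size side, I'd expect the setup to boil down to something
like this (sketch only, untested; using the variable name from the
patch, with memory_block_size_bytes() as the standard helper):

  /*
   * Derive the hot-add granularity from the runtime memory block size
   * instead of the hard-coded 128 Mbyte HA_CHUNK.
   */
  ha_chunk_pgs = memory_block_size_bytes() / PAGE_SIZE;

and the alignment advertised to the host then comes from the same
value rather than a constant.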

> 
> The code changes look significant, but in fact are just a

Nah, it's okay :)

> simple text substitution of a new global variable for the
> previous HA_CHUNK constant. No algorithms are changed except
> to initialize the new global variable and to calculate the
> alignment value to pass to Hyper-V. Testing with memblock
> sizes of 256 Mbytes and 2 Gbytes shows correct operation.
> 
> Signed-off-by: Michael Kelley <mhklinux@...look.com>
> ---
>   drivers/hv/hv_balloon.c | 64 ++++++++++++++++++++++++-----------------
>   1 file changed, 37 insertions(+), 27 deletions(-)
> 
> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
> index e000fa3b9f97..d3bfbf3d274a 100644
> --- a/drivers/hv/hv_balloon.c
> +++ b/drivers/hv/hv_balloon.c
> @@ -425,11 +425,11 @@ struct dm_info_msg {
>    * The range start_pfn : end_pfn specifies the range
>    * that the host has asked us to hot add. The range
>    * start_pfn : ha_end_pfn specifies the range that we have
> - * currently hot added. We hot add in multiples of 128M
> - * chunks; it is possible that we may not be able to bring
> - * online all the pages in the region. The range
> + * currently hot added. We hot add in chunks equal to the
> + * memory block size; it is possible that we may not be able
> + * to bring online all the pages in the region. The range
>    * covered_start_pfn:covered_end_pfn defines the pages that can
> - * be brough online.
> + * be brought online.
>    */
>   
>   struct hv_hotadd_state {
> @@ -505,8 +505,9 @@ enum hv_dm_state {
>   
>   static __u8 recv_buffer[HV_HYP_PAGE_SIZE];
>   static __u8 balloon_up_send_buffer[HV_HYP_PAGE_SIZE];
> +static unsigned long ha_chunk_pgs;

Why not stick to PAGES_IN_2M and call this ha_pages_in_chunk? Much
easier to get than "pgs".

Apart from that, looks good. Some helper macros to convert sizes to
chunks etc. might make the code even more readable.
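
Something like (completely untested, names made up on the spot):

  /* Bytes covered by one hot-add chunk. */
  #define HA_BYTES_IN_CHUNK	(ha_pages_in_chunk << PAGE_SHIFT)
  /* Number of chunks needed to cover a byte count, rounding up. */
  #define HA_SIZE_TO_CHUNKS(bytes)	DIV_ROUND_UP(bytes, HA_BYTES_IN_CHUNK)

That way the conversions in the hot-add path read as a single obvious
step instead of open-coded shifts and divides.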

-- 
Cheers,

David / dhildenb

