Message-ID: <9202aafa-f30e-4d96-72a9-3ccd083cc58c@redhat.com>
Date: Thu, 3 Dec 2020 17:13:15 +0100
From: David Hildenbrand <david@...hat.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>,
linux-hyperv@...r.kernel.org
Cc: Wei Liu <wei.liu@...nel.org>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Michael Kelley <mikelley@...rosoft.com>,
Dexuan Cui <decui@...rosoft.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH 2/2] hv_balloon: do adjust_managed_page_count() when
ballooning/un-ballooning
On 02.12.20 17:12, Vitaly Kuznetsov wrote:
> Unlike the virtio_balloon/virtio_mem/xen balloon drivers, the Hyper-V balloon
> driver does not adjust the managed page count when ballooning/un-ballooning,
> and this leads to incorrect stats being reported, e.g. unexpected 'free' output.
>
> Note, the calculation in post_status() seems to remain correct: ballooned-out
> pages are never 'available' and we manually add dm->num_pages_ballooned to
> 'committed'.
>
> Suggested-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
> drivers/hv/hv_balloon.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
> index da3b6bd2367c..8c471823a5af 100644
> --- a/drivers/hv/hv_balloon.c
> +++ b/drivers/hv/hv_balloon.c
> @@ -1198,6 +1198,7 @@ static void free_balloon_pages(struct hv_dynmem_device *dm,
> __ClearPageOffline(pg);
> __free_page(pg);
> dm->num_pages_ballooned--;
> + adjust_managed_page_count(pg, 1);
> }
> }
>
> @@ -1238,8 +1239,10 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
> split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
>
> /* mark all pages offline */
> - for (j = 0; j < alloc_unit; j++)
> + for (j = 0; j < alloc_unit; j++) {
> __SetPageOffline(pg + j);
> + adjust_managed_page_count(pg + j, -1);
> + }
>
> bl_resp->range_count++;
> bl_resp->range_array[i].finfo.start_page =
>
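For context, the two hunks above boil down to a simple invariant: every page handed back to the host (ballooned out) is subtracted from the kernel's managed page count, and every page returned to the guest is added back. A minimal userspace sketch of that accounting, with illustrative variable names standing in for the kernel's zone counters:

```c
#include <assert.h>

/* Hypothetical stand-ins for kernel state; names are illustrative only. */
static long managed_pages = 1024;   /* pages the kernel reports as usable   */
static long num_pages_ballooned;    /* pages currently returned to the host */

/* Mirrors alloc_balloon_pages(): page goes offline, managed count drops. */
static void balloon_page(void)
{
	num_pages_ballooned++;
	managed_pages--;            /* adjust_managed_page_count(pg, -1) */
}

/* Mirrors free_balloon_pages(): page comes back, managed count rises. */
static void unballoon_page(void)
{
	num_pages_ballooned--;
	managed_pages++;            /* adjust_managed_page_count(pg, 1)  */
}
```

Keeping the two counters moving in lockstep is what makes tools like 'free' show a total that shrinks and grows with the balloon, matching what the other balloon drivers already do.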
I assume this has been properly tested such that it does not change the
system behavior regarding when/how Hyper-V decides to add/remove memory.

LGTM
Reviewed-by: David Hildenbrand <david@...hat.com>
--
Thanks,
David / dhildenb