Message-ID: <13524c28-dfec-dd21-8a45-216b161deb72@redhat.com>
Date: Thu, 3 Dec 2020 18:49:39 +0100
From: David Hildenbrand <david@...hat.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>,
linux-hyperv@...r.kernel.org
Cc: Wei Liu <wei.liu@...nel.org>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Michael Kelley <mikelley@...rosoft.com>,
Dexuan Cui <decui@...rosoft.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH 2/2] hv_balloon: do adjust_managed_page_count() when
ballooning/un-ballooning
On 03.12.20 18:49, Vitaly Kuznetsov wrote:
> David Hildenbrand <david@...hat.com> writes:
>
>> On 02.12.20 17:12, Vitaly Kuznetsov wrote:
>>> Unlike virtio_balloon/virtio_mem/xen balloon drivers, Hyper-V balloon driver
>>> does not adjust managed pages count when ballooning/un-ballooning and this leads
>>> to incorrect stats being reported, e.g. unexpected 'free' output.
>>>
>>> Note, the calculation in post_status() seems to remain correct: ballooned out
>>> pages are never 'available' and we manually add dm->num_pages_ballooned to
>>> 'committed'.
>>>
>>> Suggested-by: David Hildenbrand <david@...hat.com>
>>> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
>>> ---
>>> drivers/hv/hv_balloon.c | 5 ++++-
>>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
>>> index da3b6bd2367c..8c471823a5af 100644
>>> --- a/drivers/hv/hv_balloon.c
>>> +++ b/drivers/hv/hv_balloon.c
>>> @@ -1198,6 +1198,7 @@ static void free_balloon_pages(struct hv_dynmem_device *dm,
>>> __ClearPageOffline(pg);
>>> __free_page(pg);
>>> dm->num_pages_ballooned--;
>>> + adjust_managed_page_count(pg, 1);
>>> }
>>> }
>>>
>>> @@ -1238,8 +1239,10 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
>>> split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
>>>
>>> /* mark all pages offline */
>>> - for (j = 0; j < alloc_unit; j++)
>>> + for (j = 0; j < alloc_unit; j++) {
>>> __SetPageOffline(pg + j);
>>> + adjust_managed_page_count(pg + j, -1);
>>> + }
>>>
>>> bl_resp->range_count++;
>>> bl_resp->range_array[i].finfo.start_page =
>>>
>>
>> I assume this has been properly tested such that it does not change the
>> system behavior regarding when/how HyperV decides to add/remove memory.
>>
>
> I'm always reluctant to confirm 'proper testing' as no matter how small
> and 'obvious' the change is, regressions keep happening :-) But yes,
> this was tested on a Hyper-V host with 'stress', and I observed 'free'
> output while the balloon was both inflated and deflated; the values
> looked sane.
That's what I wanted to hear ;)
--
Thanks,
David / dhildenb