Message-ID: <CAKgT0Ud35pmmfAabYJijWo8qpucUWS8-OzBW=gsotfxZFuS9PQ@mail.gmail.com>
Date: Wed, 6 Mar 2019 10:00:05 -0800
From: Alexander Duyck <alexander.duyck@...il.com>
To: Nitesh Narayan Lal <nitesh@...hat.com>
Cc: kvm list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Paolo Bonzini <pbonzini@...hat.com>, lcapitulino@...hat.com,
pagupta@...hat.com, wei.w.wang@...el.com,
Yang Zhang <yang.zhang.wz@...il.com>,
Rik van Riel <riel@...riel.com>,
David Hildenbrand <david@...hat.com>,
"Michael S. Tsirkin" <mst@...hat.com>, dodgen@...gle.com,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
dhildenb@...hat.com, Andrea Arcangeli <aarcange@...hat.com>
Subject: Re: [RFC][Patch v9 0/6] KVM: Guest Free Page Hinting
On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@...hat.com> wrote:
>
> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables guests with no page cache to rapidly free memory to, and reclaim memory from, the host.
>
> Benefit:
> With this patch-series, in our test-case, executed on a single system with a single NUMA node and 15GB memory, we were able to successfully launch 5 guests (each with 5 GB memory) when page hinting was enabled, and 3 without it. (A detailed explanation of the test procedure is provided at the bottom under Test - 1.)
>
> Changelog in v9:
> * Guest free page hinting hook is now invoked after a page has been merged in the buddy.
> * Only free pages of order FREE_PAGE_HINTING_MIN_ORDER (currently defined as MAX_ORDER - 1) are captured.
> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages.
Without a kthread this has the potential to get really ugly really
fast. If we are going to run asynchronously we should probably be
truly asynchronous and just place a few pieces of data in the page that
a worker thread can use to identify which pages have been hinted and
which pages have not. Then we can have that one thread just walking
through the zone memory pulling out fixed size pieces at a time and
providing hints on that. By doing that we avoid the potential of
creating a batch of pages that eat up most of the system memory.
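To make concrete what I have in mind, something along these lines (a
rough, untested sketch, not code from this series; PG_hinted and
HINT_BATCH are made-up names, and a real walker would also need the
zone lock and pfn_valid() checks):

#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/workqueue.h>

#define HINT_BATCH 16	/* fixed-size piece pulled per pass */

static void hint_zone_work(struct work_struct *work)
{
	struct zone *zone;
	unsigned long pfn;
	int batched;

	for_each_populated_zone(zone) {
		batched = 0;
		for (pfn = zone->zone_start_pfn;
		     pfn < zone_end_pfn(zone); pfn++) {
			struct page *page = pfn_to_page(pfn);

			/* Skip pages that aren't free or were already hinted. */
			if (!PageBuddy(page) ||
			    test_bit(PG_hinted, &page->flags))
				continue;

			set_bit(PG_hinted, &page->flags);
			/* ... queue the page for a hint to the host ... */
			if (++batched >= HINT_BATCH)
				break;
		}
	}
}

That way only one fixed-size piece per zone is ever in flight, rather
than a large batch of captured pages.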
> * Pages captured in the per-CPU array are sorted based on their zone numbers. This avoids redundantly acquiring zone locks.
> * Dynamically allocated space is used to hold the isolated guest free pages.
I have concerns that doing this per CPU and allocating memory
dynamically can result in you losing a significant amount of memory as
it sits waiting to be hinted.
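To put a rough number on it: FREE_PAGE_HINTING_MIN_ORDER = MAX_ORDER - 1
is order 10 on x86_64, i.e. 4MB per captured page. Assuming,
hypothetically, a per-CPU array of 16 entries, that is 64MB isolated
per CPU, and on a 64-CPU guest up to 4GB sitting outside the buddy
waiting to be hinted.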
> * All the pages are reported asynchronously to the host via virtio driver.
> * Pages are returned back to the guest buddy free list only when the host response is received.
I have been thinking about this. Instead of stealing the page, couldn't
you simply flag it to indicate that a hint is in progress and wait in
arch_alloc_page until the hint has been processed? The problem is that
by stealing pages you are going to introduce false OOM issues when the
memory isn't actually available because it is being hinted on.
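The sort of thing I am suggesting looks roughly like this (just a
sketch, not code from the series; the PG_hinted flag, hint_wq wait
queue, and free_page_hint_done() are all hypothetical names):

#include <linux/mm.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(hint_wq);

/* Allocation side: block until any in-flight hint on the page is done. */
void arch_alloc_page(struct page *page, int order)
{
	wait_event(hint_wq, !test_bit(PG_hinted, &page->flags));
}

/* Virtio completion side: the host has processed the hint. */
void free_page_hint_done(struct page *page)
{
	clear_bit(PG_hinted, &page->flags);
	wake_up(&hint_wq);
}

An allocation that races with a hint then just stalls briefly instead
of finding the memory missing from the free lists entirely.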
> Pending items:
> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests.
> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing)
> * Compare reporting free pages via vring with vhost.
> * Decide between MADV_DONTNEED and MADV_FREE.
> * Analyze overall performance impact due to guest free page hinting.
> * Come up with proper/traceable error-message/logs.
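On the MADV_DONTNEED vs. MADV_FREE question, for anyone following
along, the host-side call on a hinted range comes down to something
like this (a minimal userspace sketch; addr and len stand in for the
hinted guest range):

#include <sys/mman.h>

static int discard_hinted_range(void *addr, size_t len, int lazy)
{
	/*
	 * MADV_DONTNEED drops the backing pages immediately, while
	 * MADV_FREE only marks them reclaimable so they are freed
	 * lazily under memory pressure.
	 */
	return madvise(addr, len, lazy ? MADV_FREE : MADV_DONTNEED);
}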
I'll try applying these patches and see if I can reproduce the results
you reported. With the last patch set I couldn't reproduce the results
as you reported them. It has me wondering if you were somehow seeing
the effects of a balloon instead of the actual memory hints as I
couldn't find any evidence of the memory ever actually being freed
back by the hints functionality.
Also, do you have any idea if this patch set will work with an SMP
setup, or is it still racy? I might try enabling SMP in my environment
to see if I can test the scalability of the VM with something like a
will-it-scale test.
> Tests:
> 1. Use-case - Number of guests we can launch
>
> NUMA Nodes = 1 with 15 GB memory
> Guest Memory = 5 GB
> Number of cores in guest = 1
> Workload = a test allocation program that allocates 4GB of memory, touches it via memset, and exits (sketched below).
> Procedure =
> The first guest is launched and, once its console is up, the test allocation program is executed with a 4 GB memory request (because of this, the guest occupies almost 4-5 GB of memory on the host in a system without page hinting). Once this program exits, another guest is launched on the host and the same process is followed. We continue launching guests until one gets killed due to a low-memory condition on the host.
>
> Results:
> Without hinting = 3
> With hinting = 5
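For reference, the workload described above boils down to something
like this (my own sketch of such a test program, not the exact one
used):

#include <stdlib.h>
#include <string.h>

int main(void)
{
	size_t len = 4UL << 30;	/* 4 GB */
	char *buf = malloc(len);

	if (!buf)
		return 1;
	memset(buf, 1, len);	/* fault in every page */
	free(buf);
	return 0;
}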
>
> 2. Hackbench
> Guest Memory = 5 GB
> Number of cores = 4
> Number of tasks    Time with Hinting (s)    Time without Hinting (s)
> 4000               19.540                   17.818
>
>