Message-ID: <20190329084058-mutt-send-email-mst@kernel.org>
Date: Fri, 29 Mar 2019 09:26:19 -0400
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Nitesh Narayan Lal <nitesh@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, pbonzini@...hat.com, lcapitulino@...hat.com,
pagupta@...hat.com, wei.w.wang@...el.com, yang.zhang.wz@...il.com,
riel@...riel.com, david@...hat.com, dodgen@...gle.com,
konrad.wilk@...cle.com, dhildenb@...hat.com, aarcange@...hat.com,
alexander.duyck@...il.com
Subject: On guest free page hinting and OOM
On Wed, Mar 06, 2019 at 10:50:42AM -0500, Nitesh Narayan Lal wrote:
> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables guests with no page cache to rapidly free and reclaim memory to and from the host respectively.
Sorry about breaking the thread: the original subject was
KVM: Guest Free Page Hinting
but the following isn't a response to a specific patch,
so I thought it reasonable to start a new thread.
What bothers me (and others) about both Nitesh's asynchronous approach
to hinting and the hinting that is already supported in the balloon
driver is that it has the potential to create a fake OOM situation:
a page that is in the process of being hinted cannot be used. How
likely that is depends on the workload, so it is hard to predict.
Alex's patches do not have this problem because they block the
VCPUs from attempting to get new pages during hinting. That solves
the fake OOM issue but adds blocking which, most of the time, is not
necessary.
With both approaches there's a tradeoff: hinting is more efficient if it
covers large chunks of memory at a time, but as that size
increases, the chances of being able to hold on to that much memory at a
time decrease. One could call this a regular performance/memory
tradeoff, however there is a difference here: normally
guest performance is traded off for host memory (whose amount the
host knows); here guest performance is traded
for guest memory, while the benefit goes to the host, not
the guest. That makes it harder to manage.
I have an idea: how about allocating extra guest memory on the host? An
extra hinting buffer would be appended to guest memory, with the
understanding that it is destined specifically to improve page hinting.
The balloon device would get an extra parameter specifying the
hinting buffer size - e.g. in the device's config space.
At startup, the driver would get hold of the amount of
memory specified by the host as the hinting buffer size, and keep it around in a
buffer list - if no action is taken - forever. Whenever the balloon driver
wants to get hold of a page of memory and send it to the host for hinting, it
would release a page of the same size from the buffer into the free
list: the new page swaps places with a page in the buffer.
In this way the amount of useful free memory stays constant.
Once hinting is done, the page can be swapped back - or can just stay
in the hinting buffer until the next hint.
Clearly this is a memory/performance tradeoff: the more memory the host can
allocate for the hinting buffer, the more batching we get, so hints
become cheaper. Note that:
- if guest memory isn't pinned, this memory is virtual and can
  be reclaimed by the host. In particular, the guest can hint about the
  memory within the hinting buffer at startup.
- guest performance/host memory tradeoffs are reasonably well understood,
  so this is easier to manage: the host knows how much memory it can
  sacrifice to gain the benefit of hinting.
Thoughts?
--
MST