Message-ID: <51910552.5050507@redhat.com>
Date: Mon, 13 May 2013 11:22:58 -0400
From: Rik van Riel <riel@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
CC: Luiz Capitulino <lcapitulino@...hat.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
aquini@...hat.com, amit.shah@...hat.com, anton@...msg.org
Subject: Re: [RFC 2/2] virtio_balloon: auto-ballooning support
On 05/13/2013 11:16 AM, Michael S. Tsirkin wrote:
> However, there's a big question mark: host specifies
> inflate, guest says deflate, who wins?
If we're dealing with a NUMA guest, they could both win :)
The host could see reduced memory use by the guest on one
NUMA node, while the guest could see increased memory
availability on another node...
I also suspect that having some "churn" could help sort out
exactly what the working set is.
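To make that NUMA point a bit more concrete, here is a toy sketch
(explicitly not the actual virtio_balloon code; every name below is
made up for illustration) of how per-node balloon targets would let
a host inflate request and a guest deflate request both be satisfied,
as long as they land on different nodes:

	/*
	 * Toy sketch, not virtio_balloon: if balloon targets were
	 * tracked per NUMA node, the host could grow the balloon on
	 * one node while the guest shrinks it on another, and both
	 * requests can be honoured at the same time.
	 */
	struct node_balloon {
		unsigned long pages_inflated;	/* pages handed back to the host */
		unsigned long target;		/* desired balloon size on this node */
	};

	/* host wants more memory back from this node */
	static void host_inflate(struct node_balloon *b, unsigned long pages)
	{
		b->target += pages;
	}

	/* guest under pressure on this node wants some memory back */
	static void guest_deflate(struct node_balloon *b, unsigned long pages)
	{
		b->target = (b->target > pages) ? b->target - pages : 0;
	}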
> At some point Google sent patches that gave guest
> complete control over the balloon.
> This has the advantage that management isn't involved.
I believe the Google patches still included some way for the
host to initiate balloon inflation on the guest side, because
the guest's internal state alone is not enough to tell when the
host is under memory pressure.
I discussed the project with the Google developers in question
a little over a year ago, but I do not remember whether their
pressure notification went through qemu, or directly from the
host kernel to the guest kernel...
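For illustration only (I am not claiming this is what the Google
patches did), one possible host-side path would be a small daemon
that registers for memcg pressure events on the guest's cgroup and,
when pressure is reported, asks the management layer (e.g. QEMU via
the QMP "balloon" command) to inflate the guest's balloon. The
cgroup paths below are assumptions:

	/*
	 * Hedged sketch of a host-side pressure watcher using the
	 * cgroup-v1 memory.pressure_level eventfd interface.  The
	 * "/sys/fs/cgroup/memory/vm1" path is an assumption.
	 */
	#include <sys/eventfd.h>
	#include <stdio.h>
	#include <string.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <stdint.h>

	int main(void)
	{
		int efd = eventfd(0, 0);
		int pfd = open("/sys/fs/cgroup/memory/vm1/memory.pressure_level",
			       O_RDONLY);
		int cfd = open("/sys/fs/cgroup/memory/vm1/cgroup.event_control",
			       O_WRONLY);
		char buf[64];
		uint64_t events;

		if (efd < 0 || pfd < 0 || cfd < 0)
			return 1;

		/* ask the kernel to signal efd on medium memory pressure */
		snprintf(buf, sizeof(buf), "%d %d medium", efd, pfd);
		write(cfd, buf, strlen(buf));

		for (;;) {
			if (read(efd, &events, sizeof(events)) != sizeof(events))
				break;
			/*
			 * Here the daemon would lower the guest's memory
			 * target, e.g. by sending a QMP "balloon" command
			 * over QEMU's monitor socket; omitted for brevity.
			 */
		}
		return 0;
	}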
> And at some level it seems to make sense: why set
> an upper limit on size of the balloon?
> The bigger it is, the better.
Response time.
If too much of a guest's memory has been removed, it can take
too long for the guest to respond to user requests, whether over
the web, ssh, or something else...
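As a purely hypothetical illustration of that trade-off, a policy
could cap the balloon so the guest always keeps its estimated
working set plus some headroom resident:

	/*
	 * Hypothetical policy sketch, not from any patch: limit how
	 * far the balloon may inflate so the guest keeps enough
	 * memory to stay responsive.
	 */
	static unsigned long balloon_ceiling(unsigned long total_pages,
					     unsigned long working_set_pages)
	{
		/* keep the estimated working set plus 10% headroom */
		unsigned long reserve = working_set_pages + total_pages / 10;

		return (total_pages > reserve) ? total_pages - reserve : 0;
	}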
--
All rights reversed