Message-ID: <286AC319A985734F985F78AFA26841F73E42AD6D@shsmsx102.ccr.corp.intel.com>
Date: Mon, 10 Feb 2020 07:27:25 +0000
From: "Wang, Wei W" <wei.w.wang@...el.com>
To: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>,
"tysand@...gle.com" <tysand@...gle.com>,
"mst@...hat.com" <mst@...hat.com>,
"david@...hat.com" <david@...hat.com>,
"alexander.h.duyck@...ux.intel.com"
<alexander.h.duyck@...ux.intel.com>,
"rientjes@...gle.com" <rientjes@...gle.com>,
"mhocko@...nel.org" <mhocko@...nel.org>,
"namit@...are.com" <namit@...are.com>
Subject: RE: [PATCH RFC] virtio_balloon: conservative balloon page shrinking
On Monday, February 10, 2020 11:57 AM, Tetsuo Handa wrote:
> Then, "node-A's NR_FILE_PAGES is already 0 and node-B's NR_FILE_PAGES is
> not 0, but allocation request which triggered this shrinker wants to allocate
> from only node-A"
> would be confused by this change, for the pagecache pages for allocating
> thread's interested node are already depleted but the balloon cannot shrink
> when it should because the pagecache pages for allocating thread's
> uninterested nodes are not yet depleted.
The existing balloon isn't NUMA aware. Regarding "but the balloon cannot shrink": even if we let the balloon shrink, it could end up shrinking pages from the uninterested node anyway.
Once we have a NUMA-aware balloon, we could further update the shrinker to check the per-node counter, node_page_state(NR_FILE_PAGES), instead of the global one.
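Just to illustrate the idea (a rough sketch only, not the posted patch): assuming the shrinker were registered with SHRINKER_NUMA_AWARE so that sc->nid identifies the node the reclaim request cares about, and using vb->num_pages as the count of inflated pages, the count callback could do something like:

static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
						   struct shrink_control *sc)
{
	struct virtio_balloon *vb = container_of(shrinker,
						 struct virtio_balloon,
						 shrinker);

	/*
	 * Be conservative per node: only expose balloon pages to reclaim
	 * once the requesting node's pagecache is depleted, so pagecache
	 * on that node gets dropped before the balloon deflates.
	 */
	if (node_page_state(NODE_DATA(sc->nid), NR_FILE_PAGES))
		return 0;

	return vb->num_pages;
}

That would keep the "pagecache first" policy, but per node rather than based on the global NR_FILE_PAGES counter.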
>
> >
> Well, my comment is rather: "Do not try to reserve the guest's memory. In
> other words, do not try to maintain balloons on the guest side. Since the
> host can cache file data in its own pagecache, guests can quickly fetch
> file data from the host's cache via normal I/O requests." ;-)
I didn't get this one. The discussion was about guest pagecache pages vs. guest balloon pages.
Why does the host's pagecache come into it?
Best,
Wei