Message-ID: <20120627094046.GE19023@amit.redhat.com>
Date: Wed, 27 Jun 2012 15:10:46 +0530
From: Amit Shah <amit.shah@...hat.com>
To: Frank Swiderski <fes@...gle.com>
Cc: Rusty Russell <rusty@...tcorp.com.au>,
"Michael S. Tsirkin" <mst@...hat.com>, riel@...hat.com,
Andrea Arcangeli <aarcange@...hat.com>, mikew@...gle.com,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH] Add a page cache-backed balloon device driver.
On (Tue) 26 Jun 2012 [13:32:58], Frank Swiderski wrote:
> This implementation of a virtio balloon driver uses the page cache to
> "store" pages that have been released to the host. The communication
> (outside of target counts) is one way--the guest notifies the host when
> it adds a page to the page cache, allowing the host to madvise(2) with
> MADV_DONTNEED. Reclaim in the guest is therefore automatic and implicit
> (via the regular page reclaim). This means that inflating the balloon
> is similar to the existing balloon mechanism, but the deflate is
> different--it re-uses existing Linux kernel functionality to
> automatically reclaim.
This is a good idea for a guest co-operative balloon driver. I don't
think it'll replace the original driver. The traditional balloon
model is essentially driven by the host to increase guest density on
the host. This driver can't work in that case. However, using both
types of drivers together will be helpful, as unused pages on the
guest will be able to be used by the host.
Balbir Singh had done some work earlier on a guest co-operative
balloon driver, but AFAIR it was with modification of the existing
virtio-balloon driver.
I don't think a separate driver is necessary for the functionality,
though. Perhaps just a new config space item which mentions how many
pages are present in the page cache, so that the host can do some
accounting as well.
Amit
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/