Message-ID: <6d6a94c50703270141u5e59f73dj8bef0de0cfed1924@mail.gmail.com>
Date: Tue, 27 Mar 2007 16:41:33 +0800
From: "Aubrey Li" <aubreylee@...il.com>
To: "Vaidyanathan Srinivasan" <svaidy@...ux.vnet.ibm.com>
Cc: "Linux Kernel" <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
ckrm-tech@...ts.sourceforge.net,
"Balbir Singh" <balbir@...ibm.com>,
"Srivatsa Vaddagiri" <vatsa@...ibm.com>, devel@...nvz.org,
xemul@...ru, "Paul Menage" <menage@...gle.com>,
"Christoph Lameter" <clameter@....com>,
"Rik van Riel" <riel@...hat.com>
Subject: Re: [PATCH 3/3][RFC] Containers: Pagecache controller reclaim
On 3/27/07, Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com> wrote:
> Correct, shrink_page_list() is called from shrink_inactive_list() but
> the above code is patched in shrink_active_list(). The
> 'force_reclaim_mapped' label is from function shrink_active_list() and
> not in shrink_page_list() as it may seem in the patch file.
>
> While removing pages from the active_list, we want to select only
> pagecache pages and leave the rest on the active_list.
> page_mapped() pages are _not_ of interest to the pagecache controller
> (they will be taken care of by the rss controller) and hence we put
> them back. Also, if the pagecache controller is below its limit,
> there is no need to reclaim, so we put back all pages and come out.
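The selection policy described above can be sketched in a minimal,
self-contained user-space C program. This is not the kernel code from the
patch: the struct, field names, and function name are hypothetical, and the
real logic lives inside shrink_active_list() operating on struct page and
the LRU lists.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified page descriptor -- not the kernel's struct page. */
struct page {
    int mapped;     /* stands in for page_mapped() */
    int selected;   /* 1 if isolated as a reclaim candidate */
};

/*
 * Sketch of the selection policy: while scanning the active list, keep
 * only unmapped pagecache pages as reclaim candidates; mapped pages are
 * put back (they belong to the rss controller), and if the pagecache
 * controller is below its limit, every page is put back.
 * Returns the number of pages selected for reclaim.
 */
size_t select_pagecache_pages(struct page *pages, size_t n, int over_limit)
{
    size_t selected = 0;

    for (size_t i = 0; i < n; i++) {
        if (!over_limit || pages[i].mapped) {
            pages[i].selected = 0;   /* put back on the active list */
        } else {
            pages[i].selected = 1;   /* unmapped pagecache: reclaim candidate */
            selected++;
        }
    }
    return selected;
}
```

With over_limit set, only the unmapped pages are selected; with it clear,
nothing is, matching the "below limit, put back all pages" case above.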
Oh, I just read the patch; I have not applied it to my local tree, since
I'm working on 2.6.19 now.
So the question is: when the VFS pagecache limit is hit, the current
implementation reclaims only a few pages, so it is quite likely that the
limit is hit again soon, and hence the reclaim code will be called again
and again, which will hurt application performance.
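That concern can be illustrated with a toy simulation. This is purely a
hypothetical model, not kernel behavior: the limit, per-pass batch size, and
allocation count are made-up numbers chosen only to show how a small reclaim
batch makes the limit get hit on nearly every new pagecache allocation.

```c
#include <assert.h>

/*
 * Toy model: a container sits at its pagecache limit while the workload
 * keeps adding pagecache pages. Each time the limit is exceeded, one
 * reclaim pass frees 'batch' pages. Count how many reclaim passes run
 * over 'allocations' new pages.
 */
static int count_reclaim_calls(int limit, int batch, int allocations)
{
    int cached = limit;   /* start exactly at the limit */
    int calls = 0;

    for (int i = 0; i < allocations; i++) {
        cached++;                     /* new pagecache page added */
        if (cached > limit) {
            cached -= batch;          /* one reclaim pass frees 'batch' pages */
            if (cached < 0)
                cached = 0;
            calls++;
        }
    }
    return calls;
}
```

Reclaiming one page per pass triggers reclaim on every single allocation
(100 passes for 100 allocations), while freeing a larger batch per pass
spaces the passes out, which is the performance point being made.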
-Aubrey
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/