[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <Pine.LNX.4.64.0806291440030.6036@blonde.site>
Date: Sun, 29 Jun 2008 14:51:08 +0100 (BST)
From: Hugh Dickins <hugh@...itas.com>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>,
Lee Schermerhorn <lee.schermerhorn@...com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Balbir Singh <balbir@...ibm.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] splitlru: memcg swapbacked pages active
On Sun, 29 Jun 2008, KOSAKI Motohiro wrote:
>
> Well...
> you are right.
Not proved! This is all quite complex. But it looks that way.
>
> Hmm.. OK, I propse alternative way.
>
> step1: commit this patch
> step2: implement active/inactive anon balancing routine
That sounds a good plan to me.
>
> Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
>
> Thank you for good patch.
Thank you; and please accept my apology for not Cc'ing you on
the patches - I had intended to, but forgot just when sending.
>
> btw, furtunately, memcg reclaim has some retry.
> thus, lru imbalancing doesn't cause OOM, it only cause small performance degression.
> IMHO your patch doesn't have any risk.
These things can work out so differently in practice than one would
expect. Tests seem to be chugging along okay with the change in,
but I haven't explicitly tested performance with and without.
It just seems a sensible starting point to have the global and
memcg views in synch: if departing from that proves to work better,
then we should do so later.
I've no view yet on the performance characteristics of any of this,
just focussed on getting tmpfs working correctly again. The notion
of keeping unevictable pages away from the regularly scanned lists
seems an obvious gain; but the swapbacked/filebacked distinction
is not so obvious to me (and particularly problematic for tmpfs).
Hugh
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists