Message-Id: <20100312085244.98e48991.kamezawa.hiroyu@jp.fujitsu.com>
Date: Fri, 12 Mar 2010 08:52:44 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Andrea Righi <arighi@...eler.com>
Cc: Vivek Goyal <vgoyal@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
Trond Myklebust <trond.myklebust@....uio.no>,
Suleiman Souhlal <suleiman@...gle.com>,
Greg Thelen <gthelen@...gle.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Andrew Morton <akpm@...ux-foundation.org>,
containers@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH -mmotm 0/5] memcg: per cgroup dirty limit (v6)
On Fri, 12 Mar 2010 00:27:09 +0100
Andrea Righi <arighi@...eler.com> wrote:
> On Thu, Mar 11, 2010 at 10:03:07AM -0500, Vivek Goyal wrote:
> > I am still setting up the system to test whether we see any speedup in
> > writeout of large files within a memory cgroup with small memory limits.
> > I am assuming that we are expecting a speedup because we will start
> > writeouts early, and background writeouts are probably faster than direct
> > reclaim?
>
> mmh... speedup? I think with a large file write + reduced dirty limits
> you'll get a more uniform write-out (more frequent small writes),
> compared to fewer, less frequent large writes. The system will be more
> responsive, but I don't think you'll see a speedup in the large
> write itself.
>
Ah, sorry. I misunderstood something. But it depends on the dirty_ratio params.
If
background_dirty_ratio = 5
dirty_ratio = 100
under a 100M cgroup, I think background write-out will be a help.
(nonsense? ;)
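For reference, a minimal sketch of how such a setup might look. The
memory.dirty_* file names are assumed from this patch series (not a
merged kernel interface), and the /cgroup mount point is hypothetical:

    # mount the memory cgroup hierarchy and create a 100M group
    mkdir -p /cgroup
    mount -t cgroup -o memory none /cgroup
    mkdir /cgroup/test
    echo 100M > /cgroup/test/memory.limit_in_bytes
    # per-cgroup dirty limits as discussed above (file names assumed
    # from this series)
    echo 5   > /cgroup/test/memory.dirty_background_ratio
    echo 100 > /cgroup/test/memory.dirty_ratio
    # move the current shell into the group and generate dirty pages
    echo $$ > /cgroup/test/tasks
    dd if=/dev/zero of=/tmp/bigfile bs=1M count=512

With background_dirty_ratio at 5%, background write-out should start
at roughly 5M of dirty pages in the group, long before the 100M limit
forces direct reclaim.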
And I wonder whether make -j can get better numbers... Hmm.
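A rough way to check the make -j question, assuming the same
/cgroup/test group as in the sketch above (the tree path and job
count are arbitrary):

    # run a parallel kernel build inside the memory-limited cgroup
    echo $$ > /cgroup/test/tasks
    cd /usr/src/linux
    make clean
    time make -j8

Comparing the build time with and without the reduced
background_dirty_ratio would show whether earlier write-out helps.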
Thanks,
-Kame