Message-ID: <20111024145916.GA18070@sgi.com>
Date: Mon, 24 Oct 2011 09:59:16 -0500
From: Dimitri Sivanich <sivanich@....com>
To: Christoph Lameter <cl@...two.org>
Cc: David Rientjes <rientjes@...gle.com>,
Andi Kleen <andi@...stfloor.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@....ul.ie>
Subject: Re: [PATCH] Reduce vm_stat cacheline contention in
__vm_enough_memory
On Wed, Oct 19, 2011 at 10:31:54AM -0500, Christoph Lameter wrote:
> On Wed, 19 Oct 2011, Dimitri Sivanich wrote:
>
> > For 120 threads writing in parallel (each to its own mountpoint), the
> > threshold needs to be on the order of 1000. At a threshold of 750, I
> > start to see a slowdown of 50-60 MB/sec.
> >
> > For 400 threads writing in parallel, the threshold needs to be on the order
> > of 2000 (although we're off by about 40 MB/sec at that point).
> >
> > The necessary deltas in these cases are quite a bit higher than the current
> > 125 maximum (see calculate*threshold in mm/vmstat.c).
> >
> > I like the idea of having certain areas triggering vm_stat sync, as long
> > as we know what those key areas are and how often they might be called.
>
> You could potentially reduce the maximum necessary by applying my earlier
> patch (but please reduce the counters touched to the current cacheline).
> That should reduce the number of updates in the global cacheline and allow
> you to reduce the very high deltas that you have to deal with now.
I tried updating whole, single vm_stat cachelines as you suggest, but that
made little if any difference to tmpfs writeback performance. The same
higher threshold values were still necessary to significantly reduce the
contention seen in __vm_enough_memory.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/