Message-ID: <48E6121E.8050100@linux-foundation.org>
Date: Fri, 03 Oct 2008 07:37:50 -0500
From: Christoph Lameter <cl@...ux-foundation.org>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
CC: Andy Whitcroft <apw@...dowen.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Rik van Riel <riel@...hat.com>, Mel Gorman <mel@....ul.ie>,
Nick Piggin <nickpiggin@...oo.com.au>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 4/4] capture pages freed during direct reclaim for allocation
by the reclaimer
KOSAKI Motohiro wrote:
>> Parallel allocations are less of a problem if the freed order 0 pages get
>> merged immediately into the order 1 freelist. Of course that will only work
>> 50% of the time but it will have a similar effect to this patch.
>
> Ah, Right.
> Could we hear why you prefer pcp disabling over Andy's patch?
It's simpler code-wise.
> Honestly, I think pcp has some problems.
pcps are a particular problem on NUMA because the lists are replicated per
zone and per processor.
> But I avoid changing pcp because I don't understand its design.
In the worst case we see that pcps cause a 5% performance drop (sequential
alloc w/o free followed by sequential free w/o allocs). See my page allocator
tests in my git tree.
> Maybe we should discuss current pcp behavior?
pcps need improvement. The performance issues with the page allocator fastpath
are likely due to bloating of the fastpaths (antifrag did not do much good on
that level). Plus the current crop of processors is sensitive to cache
footprint issues (it seems the tbench regression in the network stack is also
due to the same effect). Doubly linked lists are not good today because they
touch multiple cachelines.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/