Message-ID: <20090922100540.GD12254@csn.ul.ie>
Date: Tue, 22 Sep 2009 11:05:40 +0100
From: Mel Gorman <mel@....ul.ie>
To: Christoph Lameter <cl@...ux-foundation.org>
Cc: Nick Piggin <npiggin@...e.de>,
Pekka Enberg <penberg@...helsinki.fi>,
heiko.carstens@...ibm.com, sachinp@...ibm.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Tejun Heo <tj@...nel.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [RFC PATCH 0/3] Fix SLQB on memoryless configurations V2
On Mon, Sep 21, 2009 at 02:17:40PM -0400, Christoph Lameter wrote:
> On Mon, 21 Sep 2009, Mel Gorman wrote:
> > Can you spot if there is something fundamentally wrong with patch 2? i.e.
> > what is wrong with treating the closest node with memory as local instead
> > of only the node itself?
>
> Depends on the way locking is done for percpu queues (likely lockless).
> A misidentification of the numa locality of an object may result in locks
> not being taken that should have been taken.
>
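To check I follow, here is a rough userspace sketch of the hazard as I
understand it (illustrative only, not the actual SLQB code; the structure
and all names are made up for the example). The idea is that once a
memoryless node is aliased to its nearest node with memory, a free can pass
the "is this local?" check and take the lockless path for an object that
really should have gone down the locked remote path:

#include <stdio.h>
#include <stdbool.h>

/* Illustrative only: a per-CPU queue with a lockless "local" free path
 * and a locked "remote" path, in the spirit of the discussion above. */
struct percpu_queue {
	int node;		/* node this queue is considered local to */
	int nr_objects;		/* touched locklessly by the owning CPU */
	bool lock_held;		/* stand-in for the remote-free lock */
};

/* Patch-2-style mapping: a memoryless node reports the id of its
 * nearest node with memory, so the two become indistinguishable. */
static int effective_node(int node, bool memoryless, int nearest)
{
	return memoryless ? nearest : node;
}

static void free_object(struct percpu_queue *q, int obj_node)
{
	if (obj_node == q->node) {
		/* Lockless fast path: assumed safe only for objects that
		 * are truly local to the CPU owning this queue. */
		q->nr_objects++;
	} else {
		/* Remote path: the list must be protected by the lock. */
		q->lock_held = true;
		q->nr_objects++;
		q->lock_held = false;
	}
}

int main(void)
{
	struct percpu_queue q = { .node = 1 };

	/* CPU on memoryless node 0, nearest memory on node 1: the object
	 * is reported as node 1, matches q->node and takes the lockless
	 * path, even though the queue's lists may belong to another CPU. */
	free_object(&q, effective_node(0, true, 1));
	printf("objects queued: %d\n", q.nr_objects);
	return 0;
}
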
Ok, I'll continue looking from that perspective and see what comes out.
I've spotted a few possible anomalies which I'll stick into a separate
patch.
> > > Or just allow SLQB for !NUMA configurations and merge it now.
> > >
> >
> > Forcing SLQB to !NUMA will not rattle out any existing list issues
> > unfortunately :(.
>
> But it will make SLQB work right in permitted configurations. The NUMA
> issues can then be fixed later upstream.
>
I'm going to punt the decision on this one to Pekka or Nick. My feeling is
to leave it enabled for NUMA so we can spot whether the problem gets fixed
for some other reason - e.g. if the stalls are due to a per-cpu problem as
stated by Sachin and SLQB happens to exacerbate it.
--
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab