Message-Id: <1179780873.7019.65.camel@twins>
Date: Mon, 21 May 2007 22:54:32 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Christoph Lameter <clameter@....com>
Cc: Matt Mackall <mpm@...enic.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Thomas Graf <tgraf@...g.ch>,
David Miller <davem@...emloft.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Daniel Phillips <phillips@...gle.com>,
Pekka Enberg <penberg@...helsinki.fi>,
Paul Jackson <pj@....com>, npiggin@...e.de
Subject: Re: [PATCH 0/5] make slab gfp fair
On Mon, 2007-05-21 at 13:32 -0700, Christoph Lameter wrote:
> On Mon, 21 May 2007, Peter Zijlstra wrote:
>
> > > This means we will disobey cpuset and memory policy constraints?
> >
> > From what I can make of it, yes. Although I'm a bit hazy on the
> > mempolicy code.
>
> In an interrupt context we do not have a process context. But there is
> no exemption from memory policy constraints.
Ah, see below.
> > > > > No the gfp zone flags are not uniform and placement of page allocator
> > > > > allocs through SLUB do not always have the same allocation constraints.
> > > >
> > > > It has to, since it can serve the allocation from a pre-existing slab
> > > > allocation. Hence any page allocation must be valid for all other users.
> > >
> > > Why does it have to? This is not true.
> >
> > Say the slab gets allocated by an allocation from interrupt context; no
> > cpuset, no policy. This same slab must be valid for whatever allocation
> > comes next, right? Regardless of whatever policy or GFP_ flags are in
> > effect for that allocation.
>
> Yes, sure, if we do not have a context then no restrictions originating
> there can be enforced. So you want to restrict the logic now to
> interrupt allocs? I.e. GFP_ATOMIC?
No, any kernel alloc.
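To make sure we're talking about the same thing, this is the kind of
interleaving I have in mind (illustrative scenario only, not actual
code paths):

	/*
	 * 1. irq context:  kmalloc(size, GFP_ATOMIC)
	 *      -> cpu slab is empty, a new slab page is allocated
	 *         with no cpuset/mempolicy context and becomes the
	 *         cpu slab.
	 *
	 * 2. task context: kmalloc(size, GFP_KERNEL), task under
	 *    MPOL_BIND
	 *      -> the fast path hands out an object from that same
	 *         cpu slab, possibly from a node outside the
	 *         task's policy.
	 */

So it is not just the interrupt alloc itself that escapes the
constraints; the next process context alloc inherits that page too.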
> > > The constraints come from the context of memory policies and cpusets. See
> > > get_any_partial().
> >
> > but get_partial() will only be called if the cpu_slab is full, up until
> > that point you have to do with whatever is there.
>
> Correct. That is an optimization but it may be called anytime from the
> perspective of an execution thread and that may cause problems with your
> approach.
I'm not seeing how this would interfere; if the alloc can be handled
from a partial slab, that is fine.
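The allocation order I'm assuming is roughly this (a sketch only, the
helper names are made up, it is not the actual slub.c code):

	static void *slab_alloc_sketch(struct kmem_cache *s,
				       gfp_t gfpflags, int node)
	{
		void *object;

		/* fast path: current cpu slab, no cpuset or
		 * mempolicy checks happen here */
		object = alloc_from_cpu_slab(s);
		if (object)
			return object;

		/* slow path: get_partial()/get_any_partial() are
		 * where the contextual constraints get enforced */
		object = alloc_from_partial_lists(s, gfpflags, node);
		if (object)
			return object;

		/* last resort: a new slab from the page allocator,
		 * allocated with the caller's gfp flags and policy */
		return alloc_new_slab(s, gfpflags, node);
	}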
> > > > As far as I can see there cannot be a hard constraint here, because
> > > > allocations from interrupt context are at best node local. And node
> > > > affine zone lists still have all zones, just ordered on locality.
> > >
> > > Interrupt context is something different. If we do not have a process
> > > context then no cpuset and memory policy constraints can apply since we
> > > have no way of determining that. If you restrict your use of the reserve
> > > cpuset to only interrupt allocs then we may indeed be fine.
> >
> > No, what I'm saying is that if the slab gets refilled from interrupt
> > context the next process context alloc will have to work with whatever
> > the interrupt left behind. Hence there is no hard constraint.
>
> It will work with whatever was left behind in the case of SLUB and a
> kmalloc alloc (optimization there). It won't if it's SLAB (which is
> stricter) or a kmalloc_node alloc. A kmalloc_node alloc will remove the
> current cpuslab if it's not on the right node.
OK, that is my understanding too. So this should be fine as well.
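For the node-affine case I picture something like this (again just a
sketch, not the real slub.c code):

	/* kmalloc_node() path: if the current cpu slab sits on the
	 * wrong node, push it back to the partial lists and get a
	 * slab from the requested node instead */
	if (cpuslab && page_to_nid(cpuslab) != node)
		deactivate_slab(s, cpuslab);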
> > > > From what I can see, it takes pretty much any page it can get once you
> > > > hit it with PF_MEMALLOC. If the page allocation doesn't use ALLOC_CPUSET
> > > > the page can come from pretty much anywhere.
> > >
> > > No it cannot. Once the current cpuslab is exhausted (which can
> > > happen at any time) it will enforce the contextual allocation
> > > constraints. See get_any_partial() in slub.c.
> >
> > If it finds no partial slabs it goes back to the page allocator; and
> > when you allocate a page under PF_MEMALLOC and the normal allocations
> > are exhausted it takes a page from pretty much anywhere.
>
> If it finds no partial slab then it will go to the page allocator which
> will allocate given the current contextual alloc constraints.
> In the case
> of a memory policy we may have limited the allocations to a single node
> where there is no escape (the zonelist does *not* contain zones of other
> nodes).
Ah, this is the point I was missing; I assumed each zonelist would
always include all zones, but would just continue/break the loop using
things like cpuset_zone_allowed_*().
This might indeed foil the game.
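That is, I assumed the page allocator filtering always looked roughly
like this (sketch of the zonelist scan as I understood it):

	for (i = 0; (zone = zonelist->zones[i]) != NULL; i++) {
		/* cpusets: the zone is in the list, it merely
		 * gets skipped */
		if (!cpuset_zone_allowed_softwall(zone, gfp_mask))
			continue;
		/* ... try to allocate from this zone ... */
	}

whereas an MPOL_BIND policy builds a zonelist that only *contains* the
bound nodes' zones, so there is simply nothing left to fall back to.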
I could 'fix' this by doing the PF_MEMALLOC allocation from the regular
node zonelist instead of from the one handed down....
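Something along these lines in the page allocator (totally untested,
just to illustrate the idea):

	/* if we are dipping into the reserves anyway, ignore the
	 * possibly policy-restricted zonelist that was handed down
	 * and use the local node's full zonelist instead */
	if ((current->flags & PF_MEMALLOC) &&
	    !(gfp_mask & __GFP_NOMEMALLOC))
		zonelist = NODE_DATA(numa_node_id())->node_zonelists
				+ gfp_zone(gfp_mask);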
/me thinks out loud.. since direct reclaim runs in whatever process
context was handed out, we're stuck with whatever policy we started
from; but since the allocations are kernel allocs - not userspace
allocs - and we're in dire straits, it makes sense to violate the
task's restraints in order to keep the machine up.
memory policies are the only ones with 'short' zonelists, right? CPU
sets are on top of whatever zonelist is handed out, and the normal
zonelists include all nodes, ordered by distance.
> The only chance to bypass this is by only dealing with allocations
> during interrupt that have no allocation context.
But you just said that interrupts are not exempt from memory policies,
and policies are the only ones that have 'short' zonelists. /me
confused.