lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <Pine.LNX.4.64.0705211239300.27622@schroedinger.engr.sgi.com>
Date:	Mon, 21 May 2007 12:43:57 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
cc:	Matt Mackall <mpm@...enic.com>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, Thomas Graf <tgraf@...g.ch>,
	David Miller <davem@...emloft.net>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Daniel Phillips <phillips@...gle.com>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Paul Jackson <pj@....com>, npiggin@...e.de
Subject: Re: [PATCH 0/5] make slab gfp fair

On Mon, 21 May 2007, Peter Zijlstra wrote:

> > So the original issue is still not fixed. A slab alloc may succeed without
> > watermarks if that particular allocation is restricted to a different set 
> > of nodes. Then the reserve slab is dropped despite the memory scarcity on
> > another set of nodes?
> 
> I can't see how. This extra ALLOC_MIN|ALLOC_HIGH|ALLOC_HARDER alloc will
> first deplete all other zones. Once that starts failing no node should
> still have pages accessible by any allocation context other than
> PF_MEMALLOC.

This means we will disobey cpuset and memory policy constraints?

> > No the gfp zone flags are not uniform and placement of page allocator 
> > allocs through SLUB do not always have the same allocation constraints.
> 
> It has to; since it can serve the allocation from a pre-existing slab
> allocation. Hence any page allocation must be valid for all other users.

Why does it have to? This is not true.

> > SLUB will check the node of the page that was allocated when the page 
> > allocator returns and put the page into that nodes slab list. This varies
> > depending on the allocation context.
> 
> Yes, it keeps slabs on per node lists. I'm just not seeing how this puts
> hard constraints on the allocations.

The constraints come from the context of memory policies and cpusets. See
get_any_partial().
 
> As far as I can see there cannot be a hard constraint here, because
> allocations form interrupt context are at best node local. And node
> affine zone lists still have all zones, just ordered on locality.

Interrupt context is something different. If we do not have a process 
context then no cpuset and memory policy constraints can apply since we
have no way of determining that. If you restrict your use of the reserve 
cpuset to only interrupt allocs then we may indeed be fine.
 
> > Allocations can be particular to uses of a slab in particular situations. 
> > A kmalloc cache can be used to allocate from various sets of nodes in 
> > different circumstances. kmalloc will allow serving a limited number of 
> > objects from the wrong nodes for performance reasons but the next 
> > allocation from the page allocator (or from the partial lists) will occur 
> > using the current set of allowed nodes in order to ensure a rough 
> > obedience to the memory policies and cpusets. kmalloc_node behaves 
> > differently and will enforce using memory from a particular node.
> 
> >From what I can see, it takes pretty much any page it can get once you
> hit it with PF_MEMALLOC. If the page allocation doesn't use ALLOC_CPUSET
> the page can come from pretty much anywhere.

No it cannot. One the current cpuslab is exhaused (which can be anytime) 
it will enforce the contextual allocation constraints. See 
get_any_partial() in slub.c.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ