Message-Id: <200902031253.28078.nickpiggin@yahoo.com.au>
Date:	Tue, 3 Feb 2009 12:53:26 +1100
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Christoph Lameter <cl@...ux-foundation.org>
Cc:	Nick Piggin <npiggin@...e.de>,
	Pekka Enberg <penberg@...helsinki.fi>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	Lin Ming <ming.m.lin@...el.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [patch] SLQB slab allocator

On Tuesday 27 January 2009 04:28:03 Christoph Lameter wrote:
> On Fri, 23 Jan 2009, Nick Piggin wrote:
> > According to memory policies, a task's memory policy is supposed to
> > apply to its slab allocations too.
>
> It does apply to slab allocations. The question is whether it has to apply
> to every object allocation or to every page allocation of the slab
> allocators.

Quite obviously it should. The behaviour of a slab allocation made on
behalf of a task constrained to a given node should not depend on which
task previously ran on this CPU and what allocations it made. Surely
you can see that this behaviour is not nice.


> > > Memory policies are applied in a fuzzy way anyways. A context switch
> > > can result in page allocation action that changes the expected
> > > interleave pattern. Page populations in an address space depend on the
> > > task policy. So the exact policy applied to a page depends on the task.
> > > This isn't an exact thing.
> >
> > There are other memory policies than just interleave though.
>
> Which have similar issues, since memory policy application depends on
> the task policy and on any memory migration that has been applied to an
> address range.

What similar issues? If a task asks to have its slab allocations constrained
to node 0, and SLUB then hands out objects from other nodes, that's bad.


> > But that is wrong. The lists obviously have high water marks that
> > get trimmed down. Periodic trimming, as I keep saying, is already
> > so infrequent that it is basically irrelevant (millions of objects
> > per CPU can be allocated between existing trimming intervals anyway).
>
> Trimming through water marks and allocating memory from the page allocator
> is going to be very frequent if you continually allocate on one processor
> and free on another.

Um, yes, that's the point. But you previously claimed that it would just
grow unconstrained, which is obviously wrong. So I don't understand what
your point is.

