Date:	Mon, 15 Jun 2009 12:12:54 +0200
From:	Nick Piggin <npiggin@...e.de>
To:	Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc:	Pekka Enberg <penberg@...helsinki.fi>,
	Heiko Carstens <heiko.carstens@...ibm.com>,
	torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
	akpm@...ux-foundation.org, cl@...ux-foundation.org,
	kamezawa.hiroyu@...fujitsu.com, lizf@...fujitsu.com, mingo@...e.hu,
	yinghai@...nel.org
Subject: Re: [GIT PULL v2] Early SLAB fixes for 2.6.31

On Mon, Jun 15, 2009 at 07:51:16PM +1000, Benjamin Herrenschmidt wrote:
> On Mon, 2009-06-15 at 11:41 +0200, Nick Piggin wrote:
> > 
> > > Btw, you should not need to use GFP_NOWAIT anymore and GFP_KERNEL
> > > should be fine even during early boot.
> > 
> > Is this the agreed way forward? 
> 
> Yes.
> 
> > I would like to maybe continue trying to have early allocations
> > pass in special flags where possible (it could even be a GFP_BOOT
> > or something). It could make it easier to reduce branches in core
> > code in the future, and things could be flagged in warnings....
> 
> The whole point of the exercise of removing the need for alloc_bootmem
> in a whole bunch of code is defeated if you now also want specific flags
> passed. I think we can cope reasonably easily.

Why? The best reason to use the slab allocator is that the allocations
are much more efficient and can also be freed later.
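
To make that concrete, here is a rough sketch of what the conversion
buys (the names early_table and early_table_entries are made up for
illustration; this is not code from the series):

	#include <linux/slab.h>

	static u32 *early_table;
	static size_t early_table_entries = 256;

	static int __init early_table_setup(void)
	{
		/*
		 * With slab up this early, plain kmalloc() works;
		 * GFP_KERNEL is fine because there is nothing to
		 * reclaim yet anyway.
		 */
		early_table = kmalloc(early_table_entries *
				      sizeof(*early_table), GFP_KERNEL);
		if (!early_table)
			return -ENOMEM;
		return 0;
	}

	static void early_table_teardown(void)
	{
		/* Unlike an alloc_bootmem() allocation, this can
		 * simply be handed back when no longer needed. */
		kfree(early_table);
		early_table = NULL;
	}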

 
> > I just like the idea of keeping such annotations.
> 
> I think the boot order is too likely to change to make it a sane thing
> to have all call sites "know" at what point they are in the boot
> process.

I disagree.

> In your example, what would GFP_BOOT mean? Before the
> scheduler is initialized? Before interrupts are on?

Before initcalls is probably easiest. But it really does not
matter that much. Why? Because if we run out of memory before
that point, there is not going to be anything to reclaim
anyway.
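
Just to sketch what that could look like (everything below is
hypothetical -- GFP_BOOT and boot_mask_flags() do not exist, the
names are made up for illustration): the flags that cannot be
honoured before initcalls would simply be masked off.

	#include <linux/gfp.h>
	#include <linux/kernel.h>

	/* Hypothetical only; not in any tree. */
	#define GFP_BOOT	GFP_NOWAIT

	static inline gfp_t boot_mask_flags(gfp_t flags)
	{
		/* Before initcalls, reclaim cannot make progress,
		 * so strip the bits that allow sleeping and doing
		 * I/O for reclaim. */
		if (system_state == SYSTEM_BOOTING)
			flags &= ~(__GFP_WAIT | __GFP_IO | __GFP_FS);
		return flags;
	}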


> There's just too much stuff involved and we don't want random
> allocations in various subsystems or arch code to be done with that
> special knowledge of where specifically in that process they are done.

If they're done that early, of course they have to know where
they are, because they only get to use a subset of kernel
services depending on exactly what has already been done.
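
That is already visible in code that has to work on either side of
slab initialisation; the usual pattern is roughly the following (the
variable and function names below are made up, only
slab_is_available() and the allocators are real):

	#include <linux/slab.h>
	#include <linux/bootmem.h>

	static void *my_early_buf;

	static void __init my_early_setup(unsigned long size)
	{
		/* The caller has to know which allocator is usable
		 * at this point in the boot sequence. */
		if (slab_is_available())
			my_early_buf = kmalloc(size, GFP_NOWAIT);
		else
			my_early_buf = alloc_bootmem(size);
	}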

> Especially since it may change.

"it" meaning the ability to reclaim memory? Not really. Not a
significant amount of memory may be reclaimed really until
after init process starts running.
 

> Additionally, I believe the flag test/masking can be moved easily enough
> out of the fast path... slub shouldn't need it there afaik, and if it's
> pushed down into the allocation of new slabs then it shouldn't be a big
> deal.

Given that things have apparently been coping fine so far, I
think it would be a backward step to just give up now and say it
is too hard simply because slab is available to use slightly
earlier.

It's not that the world is going to come to an end if we
can't remove the masking, but maybe the information can be
used in future to avoid adding more overhead, or maybe some
other debugging features can be added. I just think it is
cleaner to go that way if possible, and claiming that callers
can't be expected to know what context they call the slab
allocator from just sounds like a contradiction to me.
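
(For reference, the placement Ben suggests above -- masking pushed
down to where new slab pages are allocated, out of the per-object
fast path -- would look roughly like the sketch below. The function
name and the check are invented for illustration; they are not what
is in any tree.)

	#include <linux/gfp.h>
	#include <linux/kernel.h>

	static struct page *alloc_slab_pages(gfp_t flags, int order)
	{
		/* Mask only where backing pages are allocated, so the
		 * per-object fast path never sees the extra test. */
		if (unlikely(system_state == SYSTEM_BOOTING))
			flags &= ~(__GFP_WAIT | __GFP_IO | __GFP_FS);

		return alloc_pages(flags, order);
	}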
