Message-ID: <alpine.DEB.1.10.0905081026210.21690@qirst.com>
Date:	Fri, 8 May 2009 10:29:32 -0400 (EDT)
From:	Christoph Lameter <cl@...ux.com>
To:	Pekka Enberg <penberg@...helsinki.fi>
cc:	Cyrill Gorcunov <gorcunov@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>, mingo@...e.hu,
	mel@....ul.ie, linux-kernel@...r.kernel.org, riel@...hat.com,
	rientjes@...gle.com, xemul@...nvz.org
Subject: Re: [RFC/PATCH v2] mm: Introduce GFP_PANIC for non-failing
 allocations

On Fri, 8 May 2009, Pekka Enberg wrote:

> On Fri, 8 May 2009, Pekka Enberg wrote:
> > > +#define GFP_PANIC	(__GFP_NOFAIL | __GFP_NORETRY | __GFP_NOMEMALLOC)
>
> On Fri, 2009-05-08 at 10:20 -0400, Christoph Lameter wrote:
> > So this means not retrying the allocation a couple of times? Not delving
> > into reserve pools? Such behavior is good for an allocation that causes a
> > panic if it fails?
>
> If you do GFP_KERNEL|GFP_PANIC, we will cond_resched() and retry if we
> made some progress. So yes, I think the behavior is good for early-boot
> call-sites that can't really fail anyway.

Better make sure that GFP_PANIC is only used during early boot then.
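
Something like the following would be the kind of early-boot call-site under
discussion (purely illustrative sketch; the struct and function names are
made up, only the GFP_KERNEL | GFP_PANIC combination is from the patch):

/*
 * Hypothetical early-boot call-site.  Failure this early is
 * unrecoverable, so the proposal is to panic right away instead of
 * looping in the allocator.
 */
struct example_state {
	unsigned long flags;
};

static struct example_state *example_state;

static void __init example_early_init(void)
{
	example_state = kmalloc(sizeof(*example_state),
				GFP_KERNEL | GFP_PANIC);
	example_state->flags = 0;
}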

If memory is low during boot (due to node hotplug or some such thing;
powerpc may do evil tricks here), then with this patch the panic may trigger
in cases where, before, we would simply have delved into the reserves a bit.
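
To make the flag interaction concrete, here is a heavily simplified sketch
of the slow-path behaviour being discussed (this is not the real
mm/page_alloc.c code and the helper names are placeholders; only the
treatment of the three __GFP_* flags and the final panic reflect the
proposal):

/*
 * Simplified sketch of the allocator slow path under the proposal.
 * Helper names are placeholders, not real kernel functions.
 */
static struct page *sketch_alloc_slowpath(gfp_t gfp_mask, unsigned int order)
{
	struct page *page;
	unsigned long progress;

	for (;;) {
		page = sketch_alloc_from_freelists(gfp_mask, order);
		if (page)
			return page;

		/* __GFP_NOMEMALLOC: never dip into the emergency reserves. */
		if (!(gfp_mask & __GFP_NOMEMALLOC)) {
			page = sketch_take_from_reserves(order);
			if (page)
				return page;
		}

		progress = sketch_reclaim_pages(gfp_mask, order);

		/* __GFP_NORETRY: give up unless reclaim made some progress. */
		if ((gfp_mask & __GFP_NORETRY) && !progress)
			break;

		cond_resched();
	}

	/* __GFP_NOFAIL plus the proposed semantics: never return NULL. */
	if (gfp_mask & __GFP_NOFAIL)
		panic("GFP_PANIC allocation of order %u failed", order);

	return NULL;
}

With GFP_PANIC all three branches take the restrictive path, which is why
the panic can fire in a low-memory boot where the old behaviour would have
dipped into the reserves instead.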

