Date:	Wed, 15 Apr 2009 18:46:06 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Theodore Tso <tytso@....edu>, knikanth@...e.de,
	jens.axboe@...cle.com, neilb@...e.de, linux-kernel@...r.kernel.org,
	chris.mason@...cle.com, shaggy@...tin.ibm.com,
	xfs-masters@....sgi.com
Subject: Re: [PATCH 0/6] Handle bio_alloc failure

On Wednesday 15 April 2009 04:46:04 Andrew Morton wrote:
> On Tue, 14 Apr 2009 14:16:32 -0400
> Theodore Tso <tytso@....edu> wrote:
> 
> > In include/linux/page_alloc.h,
> > __GFP_NOFAIL is documented as "will never fail", but it says
> > absolutely nothing about __GFP_WAIT.
> 
> In the present implementation, a __GFP_WAIT allocation for order <=3
> will only fail if the caller was oom-killed.
> 
> Which raises the question "what happens when a mempool_alloc() caller
> gets oom-killed?".
> 
> Seems that it will loop around in mempool_alloc() doing weak attempts
> to allocate memory, not doing direct reclaim while waiting for someone
> else to free something up.  hm.  I guess it'll recover eventually.

Yes, it doesn't have to reclaim anything (quite likely, if we've
been OOM-killed, reclaim is very difficult or impossible at this
point anyway). It will recover when an object is returned to the
mempool by someone else. There's no point in dipping into the page
allocator reserves when the mempool already guarantees forward
progress anyway.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
