Message-Id: <20111221163240.ef73f77e.akpm@linux-foundation.org>
Date:	Wed, 21 Dec 2011 16:32:40 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Tejun Heo <tj@...nel.org>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mempool: fix first round failure behavior

On Wed, 21 Dec 2011 16:19:39 -0800
Tejun Heo <tj@...nel.org> wrote:

> For the initial allocation, mempool passes a modified gfp mask to the
> backing allocator so that it doesn't try too hard when there are
> reserved elements waiting in the pool; however, when that allocation
> fails and the pool is empty too, it either waits for the pool to be
> replenished before retrying or fails if !__GFP_WAIT.
> 
> * If the caller was calling in with GFP_ATOMIC, it never gets to try
>   the emergency reserve.  Allocations which would have succeeded
>   without mempool may fail, which is just wrong.
> 
> * An allocation which could have succeeded after a bit of reclaim now
>   has to wait on the reserved items, even though mempool does
>   eventually retry with the original gfp mask.  It just does that
>   *after* someone returns an element, pointlessly delaying things.

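For reference, the first-round behavior described above boils down to
roughly the following sketch of mempool_alloc() (simplified, not the
verbatim mm/mempool.c: locking is omitted, remove_element() stands in
for the internal pool-pop helper, and wait_event() stands in for the
actual prepare_to_wait()/io_schedule() sequence):

	void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
	{
		/* first attempt: strip __GFP_WAIT and __GFP_IO so the
		 * backing allocator doesn't try too hard while reserved
		 * elements may still be sitting in the pool */
		gfp_t gfp_temp = gfp_mask & ~(__GFP_WAIT | __GFP_IO);
		void *element;

	repeat_alloc:
		element = pool->alloc(gfp_temp, pool->pool_data);
		if (element)
			return element;

		/* backing allocator failed: fall back to the pool */
		if (pool->curr_nr)
			return remove_element(pool);

		/* pool empty too: atomic callers fail outright here,
		 * without ever retrying at full strength */
		if (!(gfp_mask & __GFP_WAIT))
			return NULL;

		/* sleeping callers wait for an element to be returned
		 * and only then retry with the original gfp mask */
		wait_event(pool->wait, pool->curr_nr > 0);
		gfp_temp = gfp_mask;
		goto repeat_alloc;
	}
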
This is a significant change in behaviour.  Previously the mempool code
would preserve emergency pools while waiting for someone to return an
item.  Now, it will permit many more items to be allocated, chewing
into the emergency pools.

We *know* that items will soon become available, so why not wait for
that to happen rather than consuming memory which less robust callers
could have utilised?

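The patch reorders the empty-pool path roughly like this (again a
simplified sketch of the proposed flow, not the exact diff):

		/* pool empty: if the first attempt used the stripped
		 * mask, retry immediately with the caller's original
		 * mask (which may now reclaim and dip into the
		 * emergency reserves) before failing or sleeping */
		if (gfp_temp != gfp_mask) {
			gfp_temp = gfp_mask;
			goto repeat_alloc;
		}

		if (!(gfp_mask & __GFP_WAIT))
			return NULL;
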
IOW, this change appears to make the kernel more vulnerable to memory
exhaustion failures?
