Date:	Thu, 3 Dec 2015 20:51:17 +0800
From:	Herbert Xu <herbert@...dor.apana.org.au>
To:	Phil Sutter <phil@....cc>, davem@...emloft.net,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	tgraf@...g.ch, fengguang.wu@...el.com, wfg@...ux.intel.com,
	lkp@...org
Subject: rhashtable: ENOMEM errors when hit with a flood of insertions

On Mon, Nov 30, 2015 at 06:18:59PM +0800, Herbert Xu wrote:
> 
> OK that's better.  I think I see the problem.  The test in
> rhashtable_insert_rehash is racy and if two threads both try
> to grow the table one of them may be tricked into doing a rehash
> instead.
> 
> I'm working on a fix.

While the EBUSY errors are gone for me, I can still see plenty
of ENOMEM errors.  In fact the reason turns out to be quite
understandable: when you pound the rhashtable so hard that it never
actually gets a chance to grow the table in process context, the
table can only grow via GFP_ATOMIC allocations.

For me this starts failing regularly at around 2^19 entries, which
requires about 1024 contiguous pages if I'm not mistaken.
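
To spell out the arithmetic behind that figure, here is a small
stand-alone sketch (the 8-byte slot size and 4 KiB page size are
assumptions for a typical 64-bit configuration, not taken from any
particular config):

#include <stdio.h>

int main(void)
{
	unsigned long entries = 1UL << 19;	/* ~2^19 hash buckets */
	unsigned long slot = sizeof(void *);	/* 8 bytes on 64-bit */
	unsigned long page = 4096;		/* typical PAGE_SIZE */
	unsigned long bytes = entries * slot;	/* 4 MiB */

	printf("bucket table: %lu bytes = %lu contiguous pages\n",
	       bytes, bytes / page);
	return 0;
}

That is an order-10 contiguous allocation, which GFP_ATOMIC has
little chance of satisfying on a busy machine, and in atomic context
there is no vmalloc fallback available either.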

I've got a fairly straightforward solution for this, but it does
mean that we have to add another level of complexity to the
rhashtable implementation.  So before I go there I want to be
absolutely sure that we need it.

I guess the question is: do we care about users that pound the
rhashtable in this fashion?

My answer would be yes but I'd like to hear your opinions.

My solution is to use a slightly more complex/less efficient hash
table when we fail the allocation in interrupt context.  Instead
of allocating contiguous pages, we'll simply switch to allocating
individual pages and have a master page that points to them.

On a 64-bit platform, each page can accommodate 512 entries.  So
with a two-level deep setup (meaning one extra access for a hash
lookup), this would accommodate 2^18 entries.  Three levels (two
extra lookups) will give us 2^27 entries, which should be enough.
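
To make the shape of that concrete, here is a rough stand-alone
sketch of a two-level table.  The names and the lazy leaf allocation
are my own illustration of the idea, not code from an actual patch;
in the kernel the leaf pages would be order-0 GFP_ATOMIC allocations
and only the single master page stays contiguous.

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SZ		4096UL
#define PTRS_PER_PAGE	(PAGE_SZ / sizeof(void *))	/* 512 on 64-bit */

struct nested_table {
	unsigned int size;	/* total bucket count, a power of two */
	void **master;		/* one page of pointers to leaf pages */
};

/* Return the address of the bucket slot for @hash.  Leaf pages are
 * allocated lazily one page at a time, so only the master page ever
 * needs to be a contiguous allocation. */
static void **nested_bucket(struct nested_table *tbl, unsigned int hash)
{
	unsigned int index = hash & (tbl->size - 1);
	void **leaf = tbl->master[index / PTRS_PER_PAGE];

	if (!leaf) {
		leaf = calloc(PTRS_PER_PAGE, sizeof(void *));
		if (!leaf)
			return NULL;
		tbl->master[index / PTRS_PER_PAGE] = leaf;
	}
	return &leaf[index % PTRS_PER_PAGE];
}

int main(void)
{
	struct nested_table tbl = {
		.size	= PTRS_PER_PAGE * PTRS_PER_PAGE, /* 512 * 512 = 2^18 */
		.master	= calloc(PTRS_PER_PAGE, sizeof(void *)),
	};
	void **slot = nested_bucket(&tbl, 0xdeadbeef);

	printf("two levels cover %u buckets, slot at %p\n",
	       tbl.size, (void *)slot);
	return 0;
}

A third level would just repeat the indexing step once more,
multiplying the capacity by another factor of 512.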

When we do this we should of course schedule an async rehash so
that as soon as we get a chance we can move the entries into a
normal hash table that needs only a single lookup.
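
The deferral itself would be the usual workqueue pattern, roughly as
follows (the struct, field and function names here are hypothetical,
purely to illustrate the flow):

#include <linux/workqueue.h>

/* Hypothetical wrapper around the table state; the real rhashtable
 * already carries a work item for deferred expansion. */
struct nested_ht {
	struct work_struct rehash_work;
	/* ... bucket table state ... */
};

/* Runs in process context, so GFP_KERNEL (or vmalloc) is available
 * for the flat replacement table before migrating the entries over. */
static void nested_ht_rehash_worker(struct work_struct *work)
{
	struct nested_ht *ht =
		container_of(work, struct nested_ht, rehash_work);

	(void)ht;	/* table migration would go here */
}

/* Called from the atomic insert path once it has had to fall back to
 * the nested table; assumes INIT_WORK() was done at table init time. */
static void nested_ht_request_rehash(struct nested_ht *ht)
{
	schedule_work(&ht->rehash_work);
}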

Cheers,
-- 
Email: Herbert Xu <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
