Message-Id: <20150514.222217.1991822787577994078.davem@davemloft.net>
Date:	Thu, 14 May 2015 22:22:17 -0400 (EDT)
From:	David Miller <davem@...emloft.net>
To:	herbert@...dor.apana.org.au
Cc:	johannes@...solutions.net, netdev@...r.kernel.org, kaber@...sh.net,
	tgraf@...g.ch, johannes.berg@...el.com
Subject: Re: rhashtable: Add cap on number of elements in hash table

From: Herbert Xu <herbert@...dor.apana.org.au>
Date: Wed, 13 May 2015 16:06:40 +0800

> We currently have no limit on the number of elements in a hash table.
> This is a problem because some users (tipc) set a ceiling on the
> maximum table size and when that is reached the hash table may
> degenerate.  Others may encounter OOM when growing and if we allow
> insertions when that happens the hash table performance may also
> suffer.
> 
> This patch adds a new parameter insecure_max_entries which becomes
> the cap on the table.  If unset it defaults to max_size.  If it is
> also zero it means that there is no cap on the number of elements
> in the table.  However, the table will grow whenever the utilisation
> hits 100% and if that growth fails, you will get ENOMEM on insertion.
> 
> As allowing >100% utilisation is potentially dangerous, the name
> contains the word insecure.
> 
> Note that the cap is not a hard limit.  This is done for performance
> reasons as enforcing a hard limit will result in use of atomic ops
> that are heavier than the ones we currently use.
> 
> The reasoning is that we're only guarding against a gross over-
> subscription of the table, rather than a small breach of the limit.
> 
> Signed-off-by: Herbert Xu <herbert@...dor.apana.org.au>
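
For context, my reading of the patch is that a user such as tipc would
set the new cap next to the existing limits in rhashtable_params,
roughly like this (illustrative only: insecure_max_entries and max_size
are the fields from the patch, everything named demo_* is made up):

    #include <linux/rhashtable.h>

    struct demo_obj {
            u32                     key;
            struct rhash_head       node;
    };

    static const struct rhashtable_params demo_params = {
            .key_len              = sizeof(u32),
            .key_offset           = offsetof(struct demo_obj, key),
            .head_offset          = offsetof(struct demo_obj, node),
            .max_size             = 1024, /* ceiling on the bucket count */
            .insecure_max_entries = 2048, /* soft cap on elements; 0 means
                                           * it falls back to max_size */
    };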

I'm not so sure I can get behind this change.

For the default case where we propagate max_size into this new limit,
if we grow the hash table to its maximum size and then go just one
element past the limit, we're going to start failing inserts.

In my opinion, it's safe to allow the insert up to at least 2 X
max_size, assuming a well chosen hash function and a roughly even
distribution.
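(To put a number on it: a table grown to max_size buckets but holding
2 X max_size entries has a load factor of 2, so with an even
distribution the average chain is only about two entries deep.)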

A two-entry-deep hash chain is not a reason to fail a hash insertion.

We would not fail hash insertions like this in pretty much any other
hash implementation in the kernel.  I would seriously rather have
three-entry-deep hash chains than no new connections at all.

If an administrator has to analyze some situation where this is
happening and they see -E2BIG propagating into their applications,
this is going to be surprising.  And when they do all of the work
to figure out what is causing it, I am pretty sure they will be
somewhat upset that we considered it OK to do this.
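
To make the failure mode concrete, here is a sketch of a typical call
site (hypothetical names, reusing the demo types from the sketch above;
rhashtable_insert_fast() is the real API, the rest is illustrative):

    /* Once the cap is exceeded the insert fails with -E2BIG, and most
     * callers simply propagate the error, so it surfaces to userspace
     * as the errno of whatever operation triggered the insert.
     */
    static int demo_add(struct rhashtable *ht, struct demo_obj *obj)
    {
            return rhashtable_insert_fast(ht, &obj->node, demo_params);
    }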

So I'd like you to take some time to reconsider hash insert failures,
and only cause them when the alternative would create an unacceptable,
_massive_ hardship for the machine.  And I do not think that two- or
three-entry-deep hash chains qualify.

Thanks.
