Message-ID: <20180327155610.GD14001@gondor.apana.org.au>
Date: Tue, 27 Mar 2018 23:56:10 +0800
From: Herbert Xu <herbert@...dor.apana.org.au>
To: NeilBrown <neilb@...e.com>
Cc: Thomas Graf <tgraf@...g.ch>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/6] rhashtable: support guaranteed successful insertion.
On Tue, Mar 27, 2018 at 10:33:04AM +1100, NeilBrown wrote:
> The current rhashtable will fail an insertion if the hashtable
> it "too full", one of:
> - table already has 2^31 elements (-E2BIG)
> - a max_size was specified and table already has that
> many elements (rounded up to power of 2) (-E2BIG)
> - a single chain has more than 16 elements (-EBUSY)
> - table has more elements than the current table size,
> and allocating a new table fails (-ENOMEM)
> - a new page needed to be allocated for a nested table,
> and the memory allocation failed (-ENOMEM).
>
> A traditional hash table does not have a concept of "too full", and
> insertion only fails if the key already exists. Many users of hash
> tables have separate means of limiting the total number of entries,
> and are not susceptible to an attack which could cause unusually large
> hash chains. For those users, the need to check for errors when
> inserting objects into an rhashtable is an unnecessary burden and hence
> a potential source of bugs (as these failures are likely to be rare).
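
For illustration, a minimal sketch of the per-insert checking described
above, assuming a hypothetical struct my_obj and params
(rhashtable_insert_fast() is the actual kernel interface):

	#include <linux/rhashtable.h>

	/* Hypothetical object embedding a rhash_head, as any
	 * rhashtable user does. */
	struct my_obj {
		u32 key;
		struct rhash_head node;
	};

	static const struct rhashtable_params my_params = {
		.key_len     = sizeof(u32),
		.key_offset  = offsetof(struct my_obj, key),
		.head_offset = offsetof(struct my_obj, node),
	};

	static int my_insert(struct rhashtable *ht, struct my_obj *obj)
	{
		/* Callers must handle -E2BIG, -EBUSY and -ENOMEM,
		 * per the failure modes listed above. */
		return rhashtable_insert_fast(ht, &obj->node, my_params);
	}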
Did you actually encounter an insertion failure? The current code
should never fail an insertion until you actually run out of memory.
That is unless you're using rhashtable when you should be using
rhlist instead.
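
A minimal sketch of the rhlist variant mentioned here, reusing the
hypothetical my_obj and my_params from the sketch above
(rhltable_insert() is the actual interface; rhlist links objects that
share a key into a per-key list rather than growing one bucket chain):

	#include <linux/rhashtable.h>

	struct my_obj {
		u32 key;
		struct rhlist_head node;	/* rhlist_head, not rhash_head */
	};

	static int my_rhl_insert(struct rhltable *hlt, struct my_obj *obj)
	{
		/* Duplicate keys are expected: equal-key objects are
		 * chained off one bucket entry instead of lengthening
		 * the hash chain. */
		return rhltable_insert(hlt, &obj->node, my_params);
	}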
Cheers,
--
Email: Herbert Xu <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt