Message-Id: <20150514.234615.235930228362522399.davem@davemloft.net>
Date: Thu, 14 May 2015 23:46:15 -0400 (EDT)
From: David Miller <davem@...emloft.net>
To: herbert@...dor.apana.org.au
Cc: johannes@...solutions.net, netdev@...r.kernel.org, kaber@...sh.net,
tgraf@...g.ch, johannes.berg@...el.com
Subject: Re: rhashtable: Add cap on number of elements in hash table

From: Herbert Xu <herbert@...dor.apana.org.au>
Date: Fri, 15 May 2015 11:06:23 +0800

> On Thu, May 14, 2015 at 10:22:17PM -0400, David Miller wrote:
>>
>> In my opinion, up to at least 2 X max_size, it's safe to allow the
>> insert, assuming a well chosen hash function and a roughly even
>> distribution.
>
> OK, I can make it 2 x max_size/table size.
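
For concreteness, a cap like that would presumably boil down to a
check of roughly this shape on the insert path. This is only a
sketch; the helper name rht_grow_above_max is illustrative, not
existing rhashtable API:

        static inline bool rht_grow_above_max(const struct rhashtable *ht)
        {
                /* Refuse inserts once the table holds more than
                 * 2 * max_size elements; max_size == 0 means that
                 * no cap was configured.
                 */
                return ht->p.max_size &&
                       atomic_read(&ht->nelems) > 2 * ht->p.max_size;
        }

        /* ...and in the insert path: */
        if (rht_grow_above_max(ht))
                return -E2BIG;
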
The rest of my email, after what you quoted, was intended to get
you to consider this issue more generally. :-)

We wouldn't fail these inserts in any other hash table in the kernel.
Would we stop making new TCP sockets if the TCP ehash chains are 3
entries deep? 4? 5? The answer to all of those is of course no,
for any hash chain length N whatsoever.
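
To make the contrast concrete: a conventional chained insert
succeeds unconditionally, no matter how deep the target bucket
already is. Schematically (hash_insert and struct my_entry are
made-up names for illustration, not code from any particular table):

        #include <linux/hash.h>
        #include <linux/list.h>
        #include <linux/types.h>

        struct my_entry {
                struct hlist_node node;
                u32 key;
        };

        /* Classic chained insert: always succeeds, no matter how
         * long the bucket's chain already is.
         */
        static void hash_insert(struct hlist_head *table,
                                unsigned int bits, struct my_entry *e)
        {
                hlist_add_head(&e->node, &table[hash_32(e->key, bits)]);
        }
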
This new rhashtable behavior would be the default, and I seriously
doubt that it is behavior which hash table users, generally
speaking, actually want.

Should there perhaps be hard protections for _extremely_ long hash
chains? Sure, I'm willing to entertain that kind of idea, but I
would do so only at the very far end of the spectrum: the point
where the hash table is degenerating into a linked list.
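
If we did add such a protection, I'd expect it to be nothing more
than a per-chain length check with a deliberately huge threshold.
As a sketch, using rhashtable's existing chain walker (the constant
and the function name here are invented for illustration):

        #define RHT_CHAIN_DEGENERATE    64      /* arbitrary, illustrative */

        static bool rht_chain_degenerate(struct bucket_table *tbl,
                                         unsigned int hash)
        {
                struct rhash_head *pos;
                unsigned int len = 0;

                /* Walk the one chain this insert hashes to and bail
                 * out only if it has effectively become a linked list.
                 */
                rht_for_each(pos, tbl, hash)
                        if (++len > RHT_CHAIN_DEGENERATE)
                                return true;

                return false;
        }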