Message-ID: <063D6719AE5E284EB5DD2968C1650D6D1CB02607@AcuExch.aculab.com>
Date: Tue, 17 Mar 2015 13:06:52 +0000
From: David Laight <David.Laight@...LAB.COM>
To: "'tgraf@...g.ch'" <tgraf@...g.ch>,
Herbert Xu <herbert@...dor.apana.org.au>
CC: David Miller <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Eric Dumazet <eric.dumazet@...il.com>
Subject: RE: [v1 PATCH 1/14] rhashtable: Remove shift from bucket_table
From: Thomas Graf
> Sent: 17 March 2015 12:40
...
> I'm not claiming perfect hash functions, and this is exactly why I
> think average utilization is not an optimal growth criterion: it
> gives a very limited view into the actual chain lengths.
If you assume that all the entries are looked up equally often,
the interesting number ought to be the number of failed compares.
So you want to sum 'length * (length - 1)/2' over all the chains and
compare that against the total number of items (or the table size).
> What you describe above is a 100% utilization scenario. Initially
> we talked about 0.1% utilization and whether to resize & rehash if a
> single chain has length > 4. My answer is: yes we should resize &
> rehash or at least rehash in that case.
0.1% utilisation is about as realistic as 100%.
50-75% is probably a reasonable limit.
This will give some short chains.
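A growth check based on that sort of limit might look like the following sketch (hypothetical names, not the actual rhashtable interface); it grows the table once the item count exceeds 75% of the bucket count:

```c
#include <stdbool.h>

/* Illustrative 75% utilisation growth criterion: grow once
 * nelems > size * 3 / 4.  Cross-multiplied to avoid integer
 * truncation; widened to avoid overflow on large tables. */
static bool should_grow(unsigned int nelems, unsigned int size)
{
	return (unsigned long)nelems * 4 > (unsigned long)size * 3;
}
```

For a 1024-bucket table this triggers at 769 items, not 768, so average chain length stays below one while the table is still reasonably full.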
> My point here is that a chain length of 4 may be a serious
> performance bottleneck already and that it might be worth trying
> to detect bad hashing distribution and attempt to fix it at an
> earlier stage while ruling out the possibility of endless rehashes.
If you have 4 items and they are all on one chain, then I suspect that
nothing you do will actually split them.
OTOH, if you have 1000 items and one chain of length 4, it really doesn't
matter. Unless, of course, someone arranges to flood the last item with
traffic - in which case you are going to lose anyway.
David