Message-ID: <20180406041348.GA14428@gondor.apana.org.au>
Date: Fri, 6 Apr 2018 12:13:48 +0800
From: Herbert Xu <herbert@...dor.apana.org.au>
To: NeilBrown <neilb@...e.com>
Cc: Thomas Graf <tgraf@...g.ch>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH 5/6] rhashtable: support guaranteed successful insertion.
On Fri, Apr 06, 2018 at 01:11:56PM +1000, NeilBrown wrote:
>
> You don't need to handle memory allocation failures at the point where
> you insert into the table - adding to a linked list requires no new
> memory.
You do actually.  The r in rhashtable stands for resizable.  We
cannot rely entirely on a background hash table resize because
insertions can be triggered from softirq context without any rate
limit (e.g., by incoming TCP connections).
Therefore at some point you either have to wait for the resize or
fail the insertion.  Since we cannot sleep, failing is the only
option.
> The likelihood of the error isn't really the issue - it either can
> happen or it cannot. If it can, it requires code to handle it.
Sure, but you just have to handle it the same way you would handle
a memory allocation failure, because that is essentially what it is.
I'm sorry if that means you have to restructure your code, but that
is the price you pay for a resizable hash table: at some point you
simply have to perform a resize.
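To illustrate, here is a minimal sketch (hypothetical "flow" object
and names, not code from the patch under discussion) of a caller
treating an insertion error exactly like a kmalloc() failure:

#include <linux/rhashtable.h>
#include <linux/slab.h>

struct flow_entry {
	u32 key;			/* hypothetical lookup key */
	struct rhash_head node;		/* linkage into the table  */
};

static const struct rhashtable_params flow_params = {
	.key_len     = sizeof(u32),
	.key_offset  = offsetof(struct flow_entry, key),
	.head_offset = offsetof(struct flow_entry, node),
};

/* May run in softirq context, so only atomic allocations are possible. */
static int flow_add(struct rhashtable *ht, u32 key)
{
	struct flow_entry *e = kmalloc(sizeof(*e), GFP_ATOMIC);
	int err;

	if (!e)
		return -ENOMEM;		/* allocation failed: drop the work */

	e->key = key;
	err = rhashtable_insert_fast(ht, &e->node, flow_params);
	if (err) {			/* e.g. the table is mid-resize and  */
		kfree(e);		/* cannot take more entries: treat   */
		return err;		/* it just like the kmalloc failure  */
	}
	return 0;
}

The caller of flow_add() then drops or defers the request on error,
the same way it would on an allocation failure.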
> Ahhh... I see what you are thinking now.  You aren't suggesting a
> sharded count that is summed periodically. Rather you are suggesting that
> we divide the hash space into N regions each with their own independent
> counter, and resize the table if any one region reaches 70% occupancy -
> on the assumption that the other regions are likely to be close. Is
> that right?
Yes.
> It would probably be dangerous to allow automatic shrinking (when just
> one drops below 30%) in that case, but it might be OK for growing.
At the finest granularity it would be a counter per bucket, so
clearly we need to maintain some bound on the granularity so that
our sample space does not become too small.
Automatic shrinking shouldn't be an issue because that is the slow
kind of resizing that we punt to kthread context, where it can
afford to do a careful count.
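To make that concrete, a rough sketch of such per-region counters
(hypothetical names, NR_REGIONS picked arbitrarily; this is not
actual rhashtable code):

#include <linux/atomic.h>
#include <linux/log2.h>

#define NR_REGIONS	64		/* bound on the granularity */

struct region_counts {
	atomic_t nelems[NR_REGIONS];
};

static inline unsigned int region_of(u32 hash)
{
	/* top bits, so each region is a contiguous slice of buckets */
	return hash >> (32 - ilog2(NR_REGIONS));
}

/* Called after a successful insert with the element's bucket hash;
 * returns true if this region alone justifies growing the table. */
static inline bool region_needs_grow(struct region_counts *rc,
				     u32 hash, unsigned int nbuckets)
{
	unsigned int r = region_of(hash);
	unsigned int n = atomic_inc_return(&rc->nelems[r]);

	/* grow once this slice exceeds 70% of its share of buckets */
	return n * 10 > (nbuckets / NR_REGIONS) * 7;
}

Shrink decisions would not use these counters at all; the deferred
worker can walk the table and count exactly before shrinking.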
Cheers,
--
Email: Herbert Xu <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt