[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-id: <173244248654.1734440.17446111766467452028@noble.neil.brown.name>
Date: Sun, 24 Nov 2024 21:01:26 +1100
From: "NeilBrown" <neilb@...e.de>
To: "Herbert Xu" <herbert@...dor.apana.org.au>
Cc: "Kent Overstreet" <kent.overstreet@...ux.dev>,
"Thomas Graf" <tgraf@...g.ch>, netdev@...r.kernel.org
Subject: Re: rhashtable issue - -EBUSY
On Sun, 24 Nov 2024, Herbert Xu wrote:
> On Sun, Nov 24, 2024 at 08:25:38PM +1100, NeilBrown wrote:
> >
> > Failure should not just be extremely unlikely. It should be
> > mathematically impossible.
>
> Please define mathematically impossible. If you mean zero then
> it's pointless since modern computers are known to have non-zero
> failure rates.
mathematically assuming perfect hardware. i.e. an analysis of the
software would show there is no way for an error to be returned.
>
> If you have a definite value in mind, then we could certainly
> tailor the maximum elasticity to achieve that goal.
I don't think there is any need to change the elasticity. Rehashing
whenever the longest chain reaches 16 seems reasonable - though some
simple rate limiting to avoid a busy-loop with a bad hash function would
not be unwelcome.
But I don't see any justification for refusing an insertion because we
haven't achieved the short chains yet. Certainly a WARN_ON_ONCE or a
rate-limited WARN_ON might be appropriate. Developers should be told
when their hash function isn't good enough.
But requiring developers to test for errors and to come up with some way
to manage them (sleep and try again is all I can think of) doesn't help anyone.
Thanks,
NeilBrown
>
> Cheers,
> --
> Email: Herbert Xu <herbert@...dor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
>
Powered by blists - more mailing lists