Message-ID: <1459346588.2055.6.camel@sipsolutions.net>
Date: Wed, 30 Mar 2016 16:03:08 +0200
From: Johannes Berg <johannes@...solutions.net>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: Ben Greear <greearb@...delatech.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"linux-wireless@...r.kernel.org" <linux-wireless@...r.kernel.org>,
netdev <netdev@...r.kernel.org>, Thomas Graf <tgraf@...g.ch>
Subject: Re: Question on rhashtable in worst-case scenario.
On Wed, 2016-03-30 at 21:55 +0800, Herbert Xu wrote:
> Well to start with you should assess whether you really want to
> hash multiple objects with the same key. In particular, can an
> adversary generate a large number of such objects?
No, the only reason this happens is local - if you take a single piece
of hardware and connect it to the same AP many times. This is what Ben
is doing - he's creating virtual interfaces on top of the same physical
hardware, and then connecting all of these to the same AP, mostly for
testing the AP.
> If your conclusion is that yes you really want to do this, then
> we have the parameter insecure_elasticity that you can use to
> disable the rehashing based on chain length.
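If I understand correctly, that would look roughly like this on our
side (just a sketch from memory, untested - the peer struct below is
made up for illustration, not the real mac80211 one):

	#include <linux/rhashtable.h>
	#include <linux/etherdevice.h>

	/* hypothetical peer entry, keyed by MAC address */
	struct peer_entry {
		u8 addr[ETH_ALEN];		/* hash key */
		struct rhash_head hash_node;	/* rhashtable linkage */
	};

	static const struct rhashtable_params peer_rht_params = {
		.key_len		= ETH_ALEN,
		.key_offset		= offsetof(struct peer_entry, addr),
		.head_offset		= offsetof(struct peer_entry, hash_node),
		.automatic_shrinking	= true,
		/*
		 * allow many entries with the same key (Ben's
		 * many-vifs-to-one-AP setup) without triggering the
		 * chain-length based rehash
		 */
		.insecure_elasticity	= true,
	};
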
But we really don't want that either - in the normal case, where you
don't create all these virtual interfaces for testing, you have a
number of peers that can vary a lot (zero to hundreds, in theory
thousands) and that *don't* share the same key, so we still want the
rehashing if the chains get longer in that case.
It's really just the degenerate case that Ben is creating locally
that's causing a problem, afaict, though it's a bit disconcerting that
rhashtable in general can cause strange failures at delete time.
johannes