Message-ID: <592ff175-3567-9cb0-2815-720a93017db7@prgmr.com>
Date: Thu, 16 Nov 2017 12:54:00 -0800
From: Sarah Newman <srn@...mr.com>
To: Nikolay Aleksandrov <nikolay@...ulusnetworks.com>,
Andrew Lunn <andrew@...n.ch>
Cc: Willy Tarreau <w@....eu>, netdev@...r.kernel.org,
roopa <roopa@...ulusnetworks.com>
Subject: Re: [PATCH] net: bridge: add max_fdb_count
On 11/16/2017 11:36 AM, Nikolay Aleksandrov wrote:
> On 16 November 2017 21:23:25 EET, Andrew Lunn <andrew@...n.ch> wrote:
>>> Linux bridges can also be used in small embedded devices. With no limit,
>>> the likely result from those devices being attacked is the device gets
>>> thrown away for being unreliable.
>>
>> Hi Sarah
>>
>> Just to get a gut feeling...
>>
>> struct net_bridge_fdb_entry is 40 bytes.
>>
>> My WiFi access point which is also a 5 port bridge, currently has 97MB
>> free RAM. That is space for about 2.5M FDB entries. So even Roopa's
>> 128K is not really a problem, in terms of memory.
The recommendation was a default maximum of ~4B entries rather than 128k or 256k entries.
2.5M entries over a 300 second aging period is ~8.3kpps.
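Spelling that out (rough numbers only, assuming ~40 bytes per entry and the
default 300 s aging time):

    97 MB free / 40 bytes per entry  ~= 2.5M entries
    2.5M entries / 300 s             ~= 8.3k new source MACs per second

so an attacker only needs to sustain roughly 8.3kpps of frames with random
source addresses to keep a table of that size fully populated.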
>>> Maybe what's needed is two thresholds, one for warning and one for
>>> enforcement. The warning limit would need to be low enough that the
>>> information had a good chance of being logged before the system was under
>>> too much load to be able to convey that information. The enforcement limit
>>> could be left as default inactive until shown that it needed to be otherwise.
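To make that concrete, roughly what I have in mind for the learn path (a
sketch only, not the submitted patch -- warn_fdb_count, max_fdb_count and
fdb_n_entries are illustrative names, not existing fields):

#include <linux/atomic.h>
#include <linux/printk.h>
#include "br_private.h"

/* Sketch: returns false when a new fdb entry should not be created.
 * A value of 0 for either knob means "disabled", which would be the
 * default for the enforcement limit. */
static bool br_fdb_may_learn(struct net_bridge *br)
{
	u32 n = atomic_read(&br->fdb_n_entries);	/* hypothetical counter */

	if (br->warn_fdb_count && n >= br->warn_fdb_count)
		pr_warn_ratelimited("%s: fdb table reached warning threshold (%u entries)\n",
				    br->dev->name, n);

	if (br->max_fdb_count && n >= br->max_fdb_count)
		return false;	/* refuse to learn; existing entries keep forwarding */

	return true;
}

The point being that the warning threshold only logs (rate limited), while
the enforcement threshold actually stops new entries from being created.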
>>
>> What exactly is the problem here? Does the DoS exhaust memory, or does
>> the hashing algorithm not scale?
My personal observation was 100% CPU usage, not memory exhaustion. Others have documented memory exhaustion.
>
> Just a note - when net-next opens I'll send patches
> which move the fdb to a resizeable hashtable that scales nicely even with hundreds of thousands of entries so only the memory issue will remain.
Thank you.
I believe that under attack, the number of entries could exceed hundreds of thousands when accumulated over the default aging time. Perhaps it would
still make sense to support a hard limit, even if it is quite high by default?
>
>>
>> It is more work, but the table could be more closely tied to the
>> memory management code. When memory is getting low, callbacks are made
>> asking to free up memory. Register such a callback and throw away part
>> of the table when memory is getting low. There is then no need to
>> limit the size, but print a rate limited warning when asked to reduce
>> the size.
That sounds reasonable, though I think it would only trigger on small embedded devices before CPU usage became an issue.
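Something along the lines of the slab shrinker interface, I assume? A very
rough, untested sketch of what registering such a callback for the fdb table
might look like (br_fdb_evict_some() and fdb_n_entries are made-up names):

#include <linux/atomic.h>
#include <linux/printk.h>
#include <linux/shrinker.h>

static atomic_long_t fdb_n_entries;	/* hypothetical total entry count */

static unsigned long br_fdb_shrink_count(struct shrinker *shrink,
					 struct shrink_control *sc)
{
	/* Tell the VM how many fdb entries could be discarded. */
	return atomic_long_read(&fdb_n_entries);
}

static unsigned long br_fdb_shrink_scan(struct shrinker *shrink,
					 struct shrink_control *sc)
{
	pr_warn_ratelimited("bridge: dropping up to %lu fdb entries under memory pressure\n",
			    sc->nr_to_scan);
	/* br_fdb_evict_some() would remove up to nr_to_scan entries,
	 * oldest first, and return how many were actually freed. */
	return br_fdb_evict_some(sc->nr_to_scan);
}

static struct shrinker br_fdb_shrinker = {
	.count_objects = br_fdb_shrink_count,
	.scan_objects  = br_fdb_shrink_scan,
	.seeks         = DEFAULT_SEEKS,
};

/* registered once at module init, e.g. from br_fdb_init():
 *	register_shrinker(&br_fdb_shrinker);
 */

That keeps the rate limited warning while letting the VM decide when the
table is too big for the machine it is running on.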
--Sarah