Message-ID: <1366045943.11284.67.camel@localhost>
Date: Mon, 15 Apr 2013 19:12:23 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Hannes Frederic Sowa <hannes@...essinduktion.org>,
netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>
Subject: Re: [RFC PATCH] inet: fix enforcing of fragment queue hash list depth
On Mon, 2013-04-15 at 09:23 -0700, Eric Dumazet wrote:
> Allowing thousand of fragments and keeping a 64 slot hash table is not
> going to work.
>
> depths of 128 are just insane.
I fully agree; my plan was actually to reduce this to a depth limit of 5 or
10. I just noticed this problem with Hannes' patch while working on your
idea of direct hash cleaning, and then extracted only the parts that were
relevant for fixing Hannes' patch.
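For illustration, a minimal sketch of what enforcing such a low per-bucket depth limit could look like: reject insertion once a hash chain reaches the limit, instead of letting chains grow unbounded. All names and the limit of 5 here are assumptions for this sketch, not the actual patch code.

```c
#include <stddef.h>

#define FRAG_HASH_SIZE  64
#define FRAG_MAX_DEPTH  5   /* assumed low limit, per the discussion above */

struct frag_queue {
	struct frag_queue *next;
	/* ... fragment reassembly state elided ... */
};

struct frag_bucket {
	struct frag_queue *chain;
	unsigned int depth;
};

static struct frag_bucket frag_hash[FRAG_HASH_SIZE];

/* Returns 0 on success, -1 if the bucket is already at max depth. */
static int frag_hash_insert(unsigned int hash, struct frag_queue *q)
{
	struct frag_bucket *b = &frag_hash[hash % FRAG_HASH_SIZE];

	if (b->depth >= FRAG_MAX_DEPTH)
		return -1;	/* drop instead of growing the chain */

	q->next = b->chain;
	b->chain = q;
	b->depth++;
	return 0;
}
```

The point of the low limit is that an attacker flooding one bucket can pin at most FRAG_MAX_DEPTH queues there, rather than hundreds.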
> Really Jesper, you'll need to make the hash table dynamic, if you really
> care.
My plan/idea is to make the hash table size depend on the available
memory. On small-memory devices we are otherwise opening up an attack
vector where remote hosts can pin down a large portion of the device's
memory, which we want to avoid. (And you don't even need a port in
listen state.)
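A rough sketch of what memory-based initial sizing could look like: scale the number of slots with total RAM, rounded to a power of two and clamped to sane bounds (similar in spirit to how the kernel sizes other large system hashes at boot). The ratio and the min/max bounds here are made-up assumptions, not proposed values.

```c
/* Pick a power-of-two hash size scaled to available memory:
 * roughly one slot per megabyte of RAM, clamped to [min_size, max_size].
 * The 1 MiB-per-slot ratio and the bounds are illustrative assumptions. */
static unsigned int frag_hash_size(unsigned long long total_ram_bytes)
{
	const unsigned long long bytes_per_slot = 1ULL << 20;	/* 1 MiB */
	const unsigned int min_size = 64, max_size = 65536;
	unsigned long long want = total_ram_bytes / bytes_per_slot;
	unsigned int size = min_size;

	while (size < want && size < max_size)
		size <<= 1;
	return size;
}
```

With this, a small embedded box stays at the current 64 slots, while a large server gets a table big enough that per-bucket depth stays low under the same global memory limit.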
How dynamic do you want it? Would initial sizing based on memory be
enough, or should I also add a proc/sysctl option for changing the hash
size from userspace?
--Jesper