Message-ID: <1366382991.16391.6.camel@edumazet-glaptop>
Date: Fri, 19 Apr 2013 07:49:51 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
netdev@...r.kernel.org
Subject: Re: [net-next PATCH 2/3] net: fix enforcing of fragment queue hash list depth
On Fri, 2013-04-19 at 14:19 +0200, Jesper Dangaard Brouer wrote:
> When removing the LRU system (which is the real bottleneck, see perf
> tests in the cover mail) and doing direct hash cleaning, we are trading
> away accuracy.
>
You are mixing performance issues and correctness.
> The reason I don't want a too big hash table is the following.
>
> Worst case 1024 buckets * 130K bytes = 133 MBytes, which on smaller
> embedded systems is a lot of kernel memory we are permitting a remote
> host to "lock-down".
That's pretty irrelevant: memory is limited by the total amount of memory
used by fragments, not by the hash table size.
It's called /proc/sys/net/ipv4/ipfrag_high_thresh
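
For reference, here is a minimal userspace sketch (an illustration, not
part of the patch) that reads the sysctls which actually cap fragment
memory, independent of the number of hash buckets. It assumes the
standard procfs paths on a Linux host:

/* Illustrative only: the thresholds below, not the hash table size,
 * bound how much memory remote fragments can pin. */
#include <stdio.h>

static long read_sysctl(const char *path)
{
	FILE *f = fopen(path, "r");
	long val = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	long high = read_sysctl("/proc/sys/net/ipv4/ipfrag_high_thresh");
	long low  = read_sysctl("/proc/sys/net/ipv4/ipfrag_low_thresh");

	if (high < 0 || low < 0) {
		fprintf(stderr, "could not read ipfrag thresholds\n");
		return 1;
	}
	/* Total fragment memory is capped at high_thresh and reclaimed
	 * back down to low_thresh, whatever the bucket count. */
	printf("ipfrag_high_thresh = %ld bytes\n", high);
	printf("ipfrag_low_thresh  = %ld bytes\n", low);
	return 0;
}

So the 1024 buckets * 130K bytes worst case can only materialize if
high_thresh is configured at least that large.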
It seems to me you are spending time on the wrong things.