Message-ID: <1284117374.30831.2.camel@lb-tlvb-eilong.il.broadcom.com>
Date:	Fri, 10 Sep 2010 14:16:14 +0300
From:	"Eilon Greenstein" <eilong@...adcom.com>
To:	"David Miller" <davem@...emloft.net>,
	"Rick Jones" <rick.jones2@...com>
cc:	"ole@....pl" <ole@....pl>,
	"eric.dumazet@...il.com" <eric.dumazet@...il.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [RFC] bnx2x: Insane RX rings

On Thu, 2010-09-09 at 14:38 -0700, Rick Jones wrote:
> David Miller wrote:
> > From: Krzysztof Olędzki <ole@....pl>
> > Date: Thu, 09 Sep 2010 23:21:01 +0200
> > 
> > 
> >>On 2010-09-09 22:45, Eric Dumazet wrote:
> >>
> >>>Problem is: with 16 RX queues per device, that's 4078*16*2 Kbytes
> >>>per ethernet port.
> >>>
> >>>Total :
> >>>
> >>>skbuff_head_cache 130747 131025  256 15 1 : tunables 120 60 8 : slabdata  8735  8735 40
> >>>size-2048         130866 130888 2048  2 1 : tunables  24 12 8 : slabdata 65444 65444 28
> >>>
> >>>That's about 300 Mbytes of memory, just in case some network
> >>>traffic occurs.
> >>>
> >>>Let's do something about that?
> >>
> >>Yep, it is ~8MB per queue - not so much on its own, but a lot in
> >>total. For this reason I use something like bnx2.num_queues=2 on
> >>servers where I don't need much CPU power for network workloads.
> > 
> > 
> > I think simply that the RX queue size should be scaled by the number
> > of queues we have.
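
For concreteness, Eric's figure breaks down roughly as follows (per
the slabinfo above: two ports, 16 queues each, 4078 buffers per ring,
2 KB data buffer plus a 256-byte skbuff head per packet):

	data buffers: 130866 * 2048 bytes ~= 256 MB
	skb heads:    130747 *  256 bytes ~=  32 MB
	total:                            ~= 288 MB  (~144 MB per port)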

There are a few factors that could be taken into account when scaling
the ring sizes:
- Number of queues per device
- Number of devices
- Available amount of memory
- Others...

I'm thinking of scaling only by the number of queues (a strawman
sketch follows below) - but that would still leave systems with many
ports consuming a lot of memory. Does that sound reasonable, or is it
not enough? Do you think the number of devices, or even the amount of
free memory, should be factored in as well?
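
As a strawman, here is roughly the kind of heuristic I have in mind
(illustrative pseudo-driver code, not actual bnx2x source; the
constant names are made up):

	/* Scale the default RX ring size down as the number of RX
	 * queues grows, so the per-port total stays roughly constant,
	 * with a floor so each queue still has some buffering.
	 */
	#define MAX_RX_RING	4078	/* default with a single queue */
	#define MIN_RX_RING	256	/* illustrative lower bound */

	static int default_rx_ring_size(int num_rx_queues)
	{
		int size = MAX_RX_RING / num_rx_queues;

		return size < MIN_RX_RING ? MIN_RX_RING : size;
	}

Note that this keeps the total per port roughly constant no matter how
many queues there are, but does nothing about the number of ports or
the amount of memory in the box.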

Thanks,
Eilon

> > If people want enormous RX ring sizes even when there are many queues,
> > they can use ethtool to get that.
> > 
> > Taking up 130MB of memory per-card, just for RX packet buffers, is
> > certainly over the top.
> 
> It gets even better if one considers JumboFrames...  that said, I've had 
> customer contacts (indirect) where they were quite keen to have a ring size of 
> at least 2048 packets - I never could get it confirmed, but I suspect they had 
> applications/systems that might "go out to lunch" for long enough periods of 
> time that they wanted that degree of FIFO.
> 
> Doesn't necessarily change "what should be the defaults" much but there it is.
> 
> rick jones
> 
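
(For reference, the ethtool knob David mentions is the standard ring
size control; anyone who wants the big rings back can do, e.g.:

	ethtool -g eth0			# show current/max ring sizes
	ethtool -G eth0 rx 4078		# restore a large RX ring

regardless of whatever default the driver picks.)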



