Date: Thu, 09 Sep 2010 23:21:01 +0200
From: Krzysztof Olędzki <ole@....pl>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: netdev <netdev@...r.kernel.org>,
Eilon Greenstein <eilong@...adcom.com>
Subject: Re: [RFC] bnx2x: Insane RX rings
On 2010-09-09 22:45, Eric Dumazet wrote:
> So I have a small dev machine, 4GB of ram,
> a dual E5540 cpu (quad core, 2 threads per core),
> so a total of 16 threads.
>
> Two ethernet ports, eth0 and eth1,
>
> 02:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711E 10Gigabit PCIe
> 02:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711E 10Gigabit PCIe
>
> bnx2x 0000:02:00.0: eth0: using MSI-X IRQs: sp 68 fp[0] 69 ... fp[15] 84
> bnx2x 0000:02:00.1: eth1: using MSI-X IRQs: sp 85 fp[0] 86 ... fp[15] 101
>
>
> Default configuration :
>
> ethtool -g eth0
> Ring parameters for eth0:
> Pre-set maximums:
> RX: 4078
> RX Mini: 0
> RX Jumbo: 0
> TX: 4078
> Current hardware settings:
> RX: 4078
> RX Mini: 0
> RX Jumbo: 0
> TX: 4078
>
> Problem is: with 16 RX queues per device, that's 4078*16*2Kbytes per
> ethernet port.
>
> Total :
>
> skbuff_head_cache 130747 131025 256 15 1 : tunables 120 60 8 : slabdata 8735 8735 40
> size-2048 130866 130888 2048 2 1 : tunables 24 12 8 : slabdata 65444 65444 28
>
> That's about 300 Mbytes of memory, just in case some network traffic occurs.
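
As a sanity check on the figure quoted above, a quick back-of-the-envelope calculation (constants taken from the ring and slab numbers in this mail):

```python
# Back-of-the-envelope check of the ~300 MB figure above.
RX_RING = 4078    # descriptors per RX ring (ethtool -g)
QUEUES = 16       # RX queues per port (fp[0]..fp[15])
PORTS = 2         # eth0 and eth1
BUF = 2048        # data buffer size (size-2048 slab object)
SKB_HEAD = 256    # skbuff_head_cache object size

buffers = RX_RING * QUEUES * PORTS
total_bytes = buffers * (BUF + SKB_HEAD)
print(buffers)                    # ~130k pre-allocated buffers
print(round(total_bytes / 1e6))   # ~301 MB
```

That matches the ~131k objects visible in both slab caches above.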
>
> Let's do something about that?
Yep, it is ~8MB per queue, not so much on its own, but a lot in total. For
this reason I use something like bnx2x.num_queues=2 on servers where I
don't need much CPU power for network workloads.
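
For reference, the two knobs involved (example values only, assuming a bnx2x build that honours both; the ring size can be changed at runtime, the queue count only at module load):

```shell
# Shrink each RX ring at runtime, e.g. from 4078 down to 1024 descriptors:
ethtool -G eth0 rx 1024
ethtool -G eth1 rx 1024
ethtool -g eth0   # verify the new setting

# Or cap the number of queues at module load time,
# e.g. via /etc/modprobe.d/bnx2x.conf:
#   options bnx2x num_queues=2
```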
Best regards,
Krzysztof Olędzki