Message-ID: <4BF056F0.8010008@ans.pl>
Date: Sun, 16 May 2010 22:34:56 +0200
From: Krzysztof Olędzki <ole@....pl>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: Michael Chan <mchan@...adcom.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: bnx2/BCM5709: why 5 interrupts on a 4 core system (2.6.33.3)
On 2010-05-16 22:15, Eric Dumazet wrote:
> On Sunday, 16 May 2010 at 13:00 -0700, Michael Chan wrote:
>> Krzysztof Oledzki wrote:
>>
>>> On 2010-05-16 20:51, Michael Chan wrote:
>>>> Krzysztof Oledzki wrote:
>>>>
>>>>>
>>>>> Why the driver registers 5 interrupts instead of 4? How to
>>>>> limit it to 4?
>>>>>
>>>>
>>>> The first vector (eth0-0) handles link interrupt and other slow
>>>> path events. It also has an RX ring for non-IP packets that are
>>>> not hashed by the RSS hash. The majority of the rx packets should
>>>> be hashed to the rx rings eth0-1 through eth0-4, so I would assign
>>>> these vectors to different CPUs.
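Assigning the vectors is done by writing a CPU mask to
/proc/irq/<n>/smp_affinity, e.g. "echo 2 > /proc/irq/68/smp_affinity".
A minimal C sketch of the same operation; the IRQ number and mask here
are only examples and will differ per system:

	#include <stdio.h>

	int main(void)
	{
		/* mask is hex: "1" = CPU0, "2" = CPU1, "4" = CPU2, "8" = CPU3 */
		FILE *f = fopen("/proc/irq/68/smp_affinity", "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		fputs("2", f);	/* pin IRQ 68 to CPU1 */
		fclose(f);
		return 0;
	}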
>>>
>>> Thank you for your prompt response.
>>>
>>> In my case the first vector must be handling something more:
>>> - "ping -f 192.168.0.1" increases interrupts on both eth1-0
>>> and eth1-4
>>> - "ping -f 192.168.0.2" increases interrupts on both eth1-0
>>> and eth1-3
>>> - "ping -f 192.168.0.3" increases interrupts on both eth1-0
>>> and eth1-1
>>> - "ping -f 192.168.0.7" increases interrupts on both eth1-0
>>> and eth1-2
>>>
>>>            CPU0       CPU1       CPU2       CPU3
>>>  67:    1563979          0          0          0   PCI-MSI-edge   eth1-0
>>>  68:    1072869          0          0          0   PCI-MSI-edge   eth1-1
>>>  69:     137905          0          0          0   PCI-MSI-edge   eth1-2
>>>  70:     259246          0          0          0   PCI-MSI-edge   eth1-3
>>>  71:     760252          0          0          0   PCI-MSI-edge   eth1-4
>>>
>>> As you can see, eth1-1 + eth1-2 + eth1-3 + eth1-4 ~= eth1-0.
>>
>> I think that ICMP ping packets will always go to ring 0 (eth1-0)
>> because they are non-IP packets. I need to double check tomorrow
>> on how exactly the hashing works on RX. Can you try running IP
>> traffic? IP packets should theoretically go to rings 1 - 4.
>>
>
> ICMP packets are IP packets (Protocol=1)
Exactly. However, the firmware may still handle ICMP and TCP differently.
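My rough mental model of the ring selection, as a sketch: the real hash
is a Toeplitz function computed by the NIC firmware, so the XOR below is
only a stand-in, and the ring numbering is an assumption:

	#include <stdint.h>

	#define NUM_RSS_RINGS	4

	/* stand-in for the Toeplitz hash the NIC computes over the flow tuple */
	static uint32_t rss_hash(uint32_t saddr, uint32_t daddr,
				 uint16_t sport, uint16_t dport)
	{
		return saddr ^ daddr ^ ((uint32_t)sport << 16 | dport);
	}

	/* ring 0 catches traffic the hash does not cover (slow path);
	 * hashed flows spread over rings 1..NUM_RSS_RINGS (eth1-1..eth1-4)
	 */
	static int pick_rx_ring(uint32_t saddr, uint32_t daddr,
				uint16_t sport, uint16_t dport, int hashable)
	{
		if (!hashable)
			return 0;
		return 1 + rss_hash(saddr, daddr, sport, dport) % NUM_RSS_RINGS;
	}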
>>> So, it seems that TX or RX is always handled by the first vector.
>>> I'll try to find out whether it is TX or RX.
>>>
>>> BTW: I'm using 802.1Q VLANs over bonding, does that change anything?
>>
>> That should not matter, as the VLAN tag is stripped before hashing.
>
> Warning: bonding currently is not multiqueue aware.
>
> All tx packets through bonding will use txqueue 0, since bnx2 doesn't
> provide an ndo_select_queue() function.
OK, that explains everything. Thank you, Eric. I assume it may take some
time for bonding to become multiqueue aware and/or for bnx2 to provide
ndo_select_queue?
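For reference, a minimal sketch of what such a hook could look like,
assuming skb_tx_hash() is the right way to spread flows over the real
TX queues; this is hypothetical, not actual bnx2 code:

	/* hypothetical: bnx2 does not implement this today */
	static u16 bnx2_select_queue(struct net_device *dev, struct sk_buff *skb)
	{
		return skb_tx_hash(dev, skb);
	}

	static const struct net_device_ops bnx2_netdev_ops = {
		/* ... existing callbacks ... */
		.ndo_select_queue	= bnx2_select_queue,
	};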
BTW: With a normal router workload, should I expect a big performance drop
when the same packet is received and forwarded on different CPUs? Bonding
provides very important functionality, so I'm not able to drop it. :(
Best regards,
Krzysztof Olędzki