Message-Id: <1259053673.2631.30.camel@ppwaskie-mobl2>
Date: Tue, 24 Nov 2009 01:07:53 -0800
From: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@...el.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: "robert@...julf.net" <robert@...julf.net>,
Jesper Dangaard Brouer <hawk@...u.dk>,
Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: ixgbe question

On Tue, 2009-11-24 at 00:46 -0700, Eric Dumazet wrote:
> Waskiewicz Jr, Peter P wrote:
> > Ok, I was confused earlier. I thought you were saying that all packets
> > were headed into a single Rx queue. This is different.
> >
> > Do you know what version of irqbalance you're running, or if it's running
> > at all? We've seen issues with irqbalance where it won't recognize the
> > ethernet device if the driver has been reloaded. In that case, it won't
> > balance the interrupts at all. If the default affinity was set to one
> > CPU, then well, you're screwed.
> >
> > My suggestion in this case is after you reload ixgbe and start your tests,
> > see if it all goes to one CPU. If it does, then restart irqbalance
> > (service irqbalance restart - or just kill it and restart by hand). Then
> > start running your test, and in 10 seconds you should see the interrupts
> > move and spread out.
> >
> > Let me know if this helps,
>
> Sure it helps!
>
> I tried both without irqbalance and with irqbalance (Ubuntu 9.10 ships irqbalance 0.55-4).
> I can see irqbalance setting the smp_affinity masks to 5555 or AAAA, with no direct effect.
>
> I do receive 16 different irqs, but they are all serviced on one cpu.
>
> The only way to get the irqs serviced on different cpus is to manually force the irq affinities to be exclusive
> (one bit set in the mask, not several), and that is not optimal for moderate loads.
>
> echo 1 >`echo /proc/irq/*/fiber1-TxRx-0/../smp_affinity`
> echo 1 >`echo /proc/irq/*/fiber1-TxRx-1/../smp_affinity`
> echo 4 >`echo /proc/irq/*/fiber1-TxRx-2/../smp_affinity`
> echo 4 >`echo /proc/irq/*/fiber1-TxRx-3/../smp_affinity`
> echo 10 >`echo /proc/irq/*/fiber1-TxRx-4/../smp_affinity`
> echo 10 >`echo /proc/irq/*/fiber1-TxRx-5/../smp_affinity`
> echo 40 >`echo /proc/irq/*/fiber1-TxRx-6/../smp_affinity`
> echo 40 >`echo /proc/irq/*/fiber1-TxRx-7/../smp_affinity`
> echo 100 >`echo /proc/irq/*/fiber1-TxRx-8/../smp_affinity`
> echo 100 >`echo /proc/irq/*/fiber1-TxRx-9/../smp_affinity`
> echo 400 >`echo /proc/irq/*/fiber1-TxRx-10/../smp_affinity`
> echo 400 >`echo /proc/irq/*/fiber1-TxRx-11/../smp_affinity`
> echo 1000 >`echo /proc/irq/*/fiber1-TxRx-12/../smp_affinity`
> echo 1000 >`echo /proc/irq/*/fiber1-TxRx-13/../smp_affinity`
> echo 4000 >`echo /proc/irq/*/fiber1-TxRx-14/../smp_affinity`
> echo 4000 >`echo /proc/irq/*/fiber1-TxRx-15/../smp_affinity`
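
For what it's worth, the sixteen writes above can be generated by a loop; this is an untested sketch that reproduces the same pair-per-core masks (it assumes the fiber1-TxRx-0..15 naming and the glob trick used above):

# pin queues 2q and 2q+1 to cpu 2q, i.e. hex masks 1,1,4,4,10,10,...
for q in `seq 0 15`; do
	printf '%x\n' $(( 1 << (q / 2 * 2) )) > `echo /proc/irq/*/fiber1-TxRx-$q/../smp_affinity`
done
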
>
>
> One other problem: after a reload of the ixgbe driver, the link comes up at 1 Gbps
> 95% of the time, and I could not find an easy way to force it to 10 Gbps.
>
You might have this elsewhere, but it sounds like you're connecting back
to back with another 82599 NIC. Our optics in that NIC are dual-rate,
and the software mechanism that tries to "autoneg" link speed gets out
of sync easily in back-to-back setups.
If it's really annoying, and you're willing to run with a local patch to
disable the autotry mechanism, try this:
diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index a5036f7..62c0915 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -4670,6 +4670,10 @@ static void ixgbe_multispeed_fiber_task(struct work_struct *work)
 	autoneg = hw->phy.autoneg_advertised;
 	if ((!autoneg) && (hw->mac.ops.get_link_capabilities))
 		hw->mac.ops.get_link_capabilities(hw, &autoneg, &negotiation);
+
+	/* force 10G only */
+	autoneg = IXGBE_LINK_SPEED_10GB_FULL;
+
 	if (hw->mac.ops.setup_link)
 		hw->mac.ops.setup_link(hw, autoneg, negotiation, true);
 	adapter->flags |= IXGBE_FLAG_NEED_LINK_UPDATE;
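
Once the patched driver is rebuilt and reloaded, the negotiated speed can be double-checked with ethtool (fiber1 being the interface name from your commands above):

# should report "Speed: 10000Mb/s" once the link is up at 10G
ethtool fiber1 | grep -i speed
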
Cheers,
-PJ