Message-ID: <87618083B2453E4A8714035B62D6799250524B10@FMSMSX105.amr.corp.intel.com>
Date: Fri, 5 Feb 2016 00:07:24 +0000
From: "Tantilov, Emil S" <emil.s.tantilov@...el.com>
To: Jay Vosburgh <jay.vosburgh@...onical.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"gospo@...ulusnetworks.com" <gospo@...ulusnetworks.com>,
zhuyj <zyjzyj2000@...il.com>,
"jiri@...lanox.com" <jiri@...lanox.com>
Subject: RE: bonding reports interface up with 0 Mbps
>-----Original Message-----
>From: Jay Vosburgh [mailto:jay.vosburgh@...onical.com]
>Sent: Thursday, February 04, 2016 12:30 PM
>To: Tantilov, Emil S
>Cc: netdev@...r.kernel.org; gospo@...ulusnetworks.com; zhuyj;
>jiri@...lanox.com
>Subject: Re: bonding reports interface up with 0 Mbps
>
>Tantilov, Emil S <emil.s.tantilov@...el.com> wrote:
>
>>We are seeing an occasional issue where the bonding driver may report
>>interface up with 0 Mbps:
>>bond0: link status definitely up for interface eth0, 0 Mbps full duplex
>>
>>So far in all the failed traces I have collected this happens on
>>NETDEV_CHANGELOWERSTATE event:
>>
>><...>-20533 [000] .... 81811.041241: ixgbe_service_task: eth1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
>><...>-20533 [000] .... 81811.041257: ixgbe_check_vf_rate_limit <-ixgbe_service_task
>><...>-20533 [000] .... 81811.041272: ixgbe_ping_all_vfs <-ixgbe_service_task
>>kworker/u48:0-7503 [010] .... 81811.041345: ixgbe_get_stats64 <-dev_get_stats
>>kworker/u48:0-7503 [010] .... 81811.041393: bond_netdev_event: eth1: event: 1b
>>kworker/u48:0-7503 [010] .... 81811.041394: bond_netdev_event: eth1: IFF_SLAVE
>>kworker/u48:0-7503 [010] .... 81811.041395: bond_netdev_event: eth1: slave->speed = ffffffff
>><...>-20533 [000] .... 81811.041407: ixgbe_ptp_overflow_check <-ixgbe_service_task
>>kworker/u48:0-7503 [010] .... 81811.041407: bond_mii_monitor: bond0: link status definitely up for interface eth1, 0 Mbps full duplex
>
> Thinking about the trace again... Emil: what happens in the
>trace before this? Is there ever a call to the ixgbe_get_settings?
>Does a NETDEV_UP or NETDEV_CHANGE event ever hit the bond_netdev_event
>function?
Yes, there are calls to ixgbe_get_settings, but the interface is still
down at that time. I managed to trim the ftrace filters down to where
the trace comes out at a decent size, and also added some debugging for
link_state and slave->link in bond_miimon_inspect() - see the attached file.
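For reference, the ffffffff above is SPEED_UNKNOWN, which the bonding
driver ends up printing as 0 Mbps. A rough way to watch for the same
thing from user space right after the slave comes up (just a sketch -
eth1 and the poll count/interval are examples, not the exact commands
from my setup):

# poll the speed reported by the driver right after ifup;
# "Speed: Unknown!" is what corresponds to SPEED_UNKNOWN/ffffffff
ifup eth1
for i in $(seq 1 20); do
    ethtool eth1 | grep -E 'Speed|Link detected'
    sleep 0.2
done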
>
> Could you describe your test that reproduces this? I'd like to
>see if I can set it up locally.
It is basically an up/down of the bonding interface:
ifdown bond0
ifup eth0
ifup eth1
<wait for link>
<check dmesg>
in a loop
ifdown bond0 brings bond0 and eth0/1 down; ifup eth0/1 brings the ixgbe
interfaces back up, which kicks the bond0 interface up as well.
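As a script the loop is roughly the following (a sketch of the steps
above - the iteration count, wait time and grep pattern are
placeholders, not my exact script):

#!/bin/sh
# cycle the bond and its slaves, then look for the bogus
# "0 Mbps" link-up message from the bonding driver
for i in $(seq 1 200); do
    ifdown bond0
    ifup eth0
    ifup eth1
    sleep 10                                        # <wait for link>
    if dmesg | grep -q '0 Mbps full duplex'; then   # <check dmesg>
        echo "hit the 0 Mbps link-up on iteration $i"
        break
    fi
done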
Thanks,
Emil
Attachment: "trace_0_mbps_4.log" (application/octet-stream, 237054 bytes)