Open Source and information security mailing list archives
 
Message-ID: <87618083B2453E4A8714035B62D67992505210DB@FMSMSX105.amr.corp.intel.com>
Date:	Mon, 1 Feb 2016 15:53:33 +0000
From:	"Tantilov, Emil S" <emil.s.tantilov@...el.com>
To:	"zyjzyj2000@...il.com" <zyjzyj2000@...il.com>,
	"Schmitt, Phillip J" <phillip.j.schmitt@...el.com>,
	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"e1000-devel@...ts.sourceforge.net" 
	<e1000-devel@...ts.sourceforge.net>,
	"Shteinbock, Boris (Wind River)" <boris.shteinbock@...driver.com>
Subject: RE: ixgbe: get link speed as a slave nic unrelated with link 

>-----Original Message-----
>From: zyjzyj2000@...il.com [mailto:zyjzyj2000@...il.com]
>Sent: Sunday, January 31, 2016 11:28 PM
>To: zyjzyj2000@...il.com; Tantilov, Emil S; Schmitt, Phillip J; Kirsher,
>Jeffrey T; netdev@...r.kernel.org; e1000-devel@...ts.sourceforge.net;
>Shteinbock, Boris (Wind River)
>Subject: ixgbe: get link speed as a slave nic unrelated with link
>
>
>Hi, Emil
>
>Thanks for your patch.
>After I applied your patch, the following are the feedback from my users.
>
>"
>Users had tested the latest patch that you provided and it is much improved
>now. However it’s still not good enough as the users are planning field
>deployment. Here are their findings:
>
>So close, but not quite 100%. I ran over 2500 re-negotiations on one
>interface of a bonded pair and got the 0 Mbps status a total of three times.
>The longest run without a single error was something like 1800 re-
>negotiations or so. So, this version seems to improve the situation
>immensely (the unpatched driver fails roughly 25% of the time), but there
>still seems to remain some tiny race somewhere.

Yes, at the time the bonding interface comes up there can be a message about 0 Mbps in dmesg;
however, once the bond is fully up it will report the correct speeds, as seen by:
# cat /proc/net/bonding/bond0
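For reference, the per-slave speeds in that file can also be checked programmatically when scripting a re-negotiation test. A minimal sketch that parses the bonding driver's text output (the sample below is illustrative, not taken from this system; slave names are examples):

```python
# Hypothetical sketch: extract per-slave link speeds from the text format
# of /proc/net/bonding/bond0. "Slave Interface:" and "Speed:" are the
# field labels used by the bonding driver's proc output.
import re

def slave_speeds(bonding_text):
    """Return {slave_name: speed_string} parsed from bonding proc output."""
    speeds = {}
    current = None
    for line in bonding_text.splitlines():
        m = re.match(r"Slave Interface:\s*(\S+)", line)
        if m:
            current = m.group(1)
            continue
        m = re.match(r"Speed:\s*(.+)", line.strip())
        if m and current:
            speeds[current] = m.group(1).strip()
    return speeds

# Illustrative sample of the proc file's layout.
sample = """\
Ethernet Channel Bonding Driver: v3.7.1

Bonding Mode: fault-tolerance (active-backup)

Slave Interface: eth0
MII Status: up
Speed: 10000 Mbps
Duplex: full

Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
"""

print(slave_speeds(sample))  # {'eth0': '10000 Mbps', 'eth1': '10000 Mbps'}
```

In a stress loop this lets a script assert that no slave ever reports "Unknown" or 0 Mbps after the bond settles, rather than grepping dmesg for the transient message.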

Thanks,
Emil

