Message-ID: <262CB373A6D1F14F9B81E82F74F77D5A46FF13B8@avmb2.qlogic.org>
Date:	Fri, 11 Jul 2014 10:42:31 +0000
From:	Shahed Shaikh <shahed.shaikh@...gic.com>
To:	Holger Kiehl <Holger.Kiehl@....de>
CC:	netdev <netdev@...r.kernel.org>,
	Dept-HSG Linux NIC Dev <Dept-HSGLinuxNICDev@...gic.com>
Subject: RE: qlcnic very high TX values, as of 3.13.x

> -----Original Message-----
> From: netdev-owner@...r.kernel.org [mailto:netdev-
> owner@...r.kernel.org] On Behalf Of Holger Kiehl
> Sent: Friday, July 11, 2014 2:43 PM
> To: netdev
> Subject: qlcnic very high TX values, as of 3.13.x
> 
> Hello,

Hi Holger,

> 
> after upgrading from 3.10.x to the next stable series, 3.14.x, I noticed that
> ifconfig reports very high TX values. Taking the qlcnic source from 3.15.5
> and compiling it under 3.14.12, the problem remains. Going backwards,
> copying the qlcnic source from each older kernel into the 3.14.12 tree,
> I found that 3.12.x was the last version that does not generate those high
> TX values, so the problem started with the qlcnic driver in 3.13.x.
> However, comparing 3.13.x and 3.14.x, the numbers grow much more quickly in
> 3.14.x: there I get TX values in terabytes very soon after boot, and once
> even petabyte values!
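[The TX values ifconfig prints come straight from the kernel's per-interface counters in sysfs, so the runaway numbers can be watched without ifconfig at all. A minimal sketch, using lo as a stand-in for the affected qlcnic port (e.g. eth2 on the host above):

```shell
# Read the kernel's per-interface TX byte counter directly from sysfs.
# "lo" is a stand-in here; on the affected host this would be the qlcnic port.
IFACE=lo
STAT=/sys/class/net/$IFACE/statistics/tx_bytes
tx1=$(cat "$STAT" 2>/dev/null || echo 0)
ping -c 1 127.0.0.1 > /dev/null 2>&1 || true   # nudge a little loopback traffic
tx2=$(cat "$STAT" 2>/dev/null || echo 0)
echo "TX bytes on $IFACE grew by $((tx2 - tx1))"
```

On a healthy interface this delta tracks real traffic; on the misbehaving qlcnic port it would keep climbing even with the interface idle.]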

Can you please provide the qlcnic driver version used in 3.12.x?
Please use "ethtool -i ethX" to get this info.

Also, can you please provide the driver statistics output with both kernels (3.12.x and 3.14.x)?
Please use the "ethtool -S ethX" command to collect driver statistics.
This would tell us whether the QLogic adapter indeed sent that much traffic, or whether it is a bug in the Tx statistics that ifconfig reports.
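A simple way to compare the two kernels is to capture the "ethtool -S" output to files and diff the counters. A sketch, with simulated snapshots standing in for real "ethtool -S eth2" output (the counter names shown are illustrative; the actual set varies by driver version):

```shell
# Simulated "ethtool -S" snapshots; on the real host you would capture them:
#   ethtool -S eth2 > before.txt; sleep 60; ethtool -S eth2 > after.txt
cat > before.txt <<'EOF'
     xmit_called: 1000
     xmit_finished: 1000
     tx_bytes: 500000
EOF
cat > after.txt <<'EOF'
     xmit_called: 1200
     xmit_finished: 1200
     tx_bytes: 620000
EOF

# Print every counter that changed between the two snapshots, with its delta.
awk -F': ' '
  { gsub(/^[ \t]+/, "", $1) }            # strip ethtool output indentation
  NR == FNR { prev[$1] = $2; next }      # first file: remember old values
  ($1 in prev) && ($2 != prev[$1]) {
    printf "%s: +%d\n", $1, $2 - prev[$1]
  }' before.txt after.txt | tee deltas.txt
```

If the per-queue Tx counters in the driver stats stay sane while ifconfig's TX bytes explode, that points at the accounting rather than real traffic.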

Thanks,
Shahed
> 
> Hardware is the following:
> 
>       HP ProLiant DL380 G7
>       2 x Intel Xeon X5690 (24 cores with hyperthreading)
>       106 GByte Ram
>       1 x NC523SFP 10Gb 2-port Server Adapter Board Chip rev 0x54 (qlcnic)
>       1 x Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection
>       (ixgbe)
> 
> The qlcnic and ixgbe cards are bonded together in fault-tolerance
> (active-backup) mode. Even after I switch to the Intel card, once the crazy
> TX values appear on the qlcnic card, the TX values on that card still go up
> at a very quick rate. This only stops when I reset the card (reload the
> module). There is also no difference whether I compile the driver in or use
> it as a module. There are no strange messages in /var/log/messages or dmesg.
> Here is the output with the 3.13.x driver in 3.14.12 when the system boots:
> 
>      [   18.229195] QLogic 1/10 GbE Converged/Intelligent Ethernet Driver
>      v5.3.52
>      [   18.229415] qlcnic 0000:1a:00.0: 2048KB memory map
>      [   18.854134] qlcnic 0000:1a:00.0: Default minidump capture mask 0x1f
>      [   19.602491] qlcnic 0000:1a:00.0: FW dump enabled
>      [   19.631257] qlcnic 0000:1a:00.0: Supports FW dump capability
>      [   19.667072] qlcnic 0000:1a:00.0: Driver v5.3.52, firmware v4.14.26
>      [   19.704279] qlcnic 0000:1a:00.0: Set 4 Tx rings
>      [   19.733001] qlcnic 0000:1a:00.0: Set 4 SDS rings
>      [   19.898808] qlcnic: 2c:27:d7:50:04:48: NC523SFP 10Gb 2-port Server
>      Adapter Board Chip rev 0x54
>      [   19.949325] qlcnic 0000:1a:00.0: irq 129 for MSI/MSI-X
>      [   19.949329] qlcnic 0000:1a:00.0: irq 130 for MSI/MSI-X
>      [   19.949333] qlcnic 0000:1a:00.0: irq 131 for MSI/MSI-X
>      [   19.949336] qlcnic 0000:1a:00.0: irq 132 for MSI/MSI-X
>      [   19.949340] qlcnic 0000:1a:00.0: irq 133 for MSI/MSI-X
>      [   19.949343] qlcnic 0000:1a:00.0: irq 134 for MSI/MSI-X
>      [   19.949347] qlcnic 0000:1a:00.0: irq 135 for MSI/MSI-X
>      [   19.949350] qlcnic 0000:1a:00.0: irq 136 for MSI/MSI-X
>      [   19.949369] qlcnic 0000:1a:00.0: using msi-x interrupts
>      [   19.982782] qlcnic 0000:1a:00.0: Set 4 Tx queues
>      [   20.055099] qlcnic 0000:1a:00.0: eth2: XGbE port initialized
>      [   20.090408] qlcnic 0000:1a:00.1: 2048KB memory map
>      [   20.179836] qlcnic 0000:1a:00.1: Default minidump capture mask 0x1f
>      [   20.217848] qlcnic 0000:1a:00.1: FW dump enabled
>      [   20.246979] qlcnic 0000:1a:00.1: Supports FW dump capability
>      [   20.282318] qlcnic 0000:1a:00.1: Driver v5.3.52, firmware v4.14.26
>      [   20.320238] qlcnic 0000:1a:00.1: Set 4 Tx rings
>      [   20.350038] qlcnic 0000:1a:00.1: Set 4 SDS rings
>      [   20.429714] qlcnic 0000:1a:00.1: irq 137 for MSI/MSI-X
>      [   20.429718] qlcnic 0000:1a:00.1: irq 138 for MSI/MSI-X
>      [   20.429722] qlcnic 0000:1a:00.1: irq 139 for MSI/MSI-X
>      [   20.429726] qlcnic 0000:1a:00.1: irq 140 for MSI/MSI-X
>      [   20.429729] qlcnic 0000:1a:00.1: irq 141 for MSI/MSI-X
>      [   20.429732] qlcnic 0000:1a:00.1: irq 142 for MSI/MSI-X
>      [   20.429736] qlcnic 0000:1a:00.1: irq 143 for MSI/MSI-X
>      [   20.429739] qlcnic 0000:1a:00.1: irq 144 for MSI/MSI-X
>      [   20.429757] qlcnic 0000:1a:00.1: using msi-x interrupts
>      [   20.458895] qlcnic 0000:1a:00.1: Set 4 Tx queues
>      [   20.486907] qlcnic 0000:1a:00.1: eth3: XGbE port initialized
> 
> My kernel config can be downloaded here:
> 
>     ftp://ftp.dwd.de/pub/afd/test/.config
> 
> Please, just ask if I need to provide more details and please CC me, since I am
> not on the list.
> 
> Thanks,
> Holger
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in the body
> of a message to majordomo@...r.kernel.org More majordomo info at
> http://vger.kernel.org/majordomo-info.html
