Date:	Tue, 15 Jan 2008 13:53:43 -0800
From:	"Brandeburg, Jesse" <jesse.brandeburg@...el.com>
To:	<slavon@...telecom.ru>, "Frans Pop" <elendil@...net.nl>
Cc:	"David Miller" <davem@...emloft.net>, <netdev@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>
Subject: RE: [REGRESSION] 2.6.24-rc7: e1000: Detected Tx Unit Hang

slavon@...telecom.ru wrote:
> Quoting Frans Pop <elendil@...net.nl>:
>>> (Note this isn't the final correct patch we should apply.  There  is
>>> no reason why this revert back to the older ->poll() logic  here
>>> should have any effect on the TX hang triggering...)
>> 
>> s/no reason/no obvious reason/ ? ;-)

The tx code has an "early exit" that limits the number of tx packets
handled in a single poll loop, so NAPI or interrupt rescheduling is
required based on the return value from e1000_clean_tx_irq.

See this code in e1000_clean_tx_irq():

4005 #ifdef CONFIG_E1000_NAPI
4006 #define E1000_TX_WEIGHT 64
4007 		/* weight of a sort for tx, to avoid endless transmit cleanup */
4008 		if (count++ == E1000_TX_WEIGHT) break;
4009 #endif
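
(To show the mechanism outside the driver: the stand-alone sketch below
is not e1000 code, just a minimal user-space analogy of the weight-based
early exit and the re-poll decision it forces on the caller.  The names
TX_WEIGHT, clean_tx, poll count and the pending_tx counter are made up
for the example.)

#include <stdbool.h>
#include <stdio.h>

#define TX_WEIGHT 64

static int pending_tx = 200;	/* pretend 200 descriptors await cleanup */

/* Returns true if it stopped early because it reached TX_WEIGHT,
 * i.e. completed descriptors may still be left to reap. */
static bool clean_tx(void)
{
	int count = 0;

	while (pending_tx > 0) {
		pending_tx--;		/* "clean" one descriptor */
		if (++count == TX_WEIGHT)
			break;		/* the early exit in question */
	}
	return pending_tx > 0;
}

int main(void)
{
	int polls = 0;

	/* The caller must keep polling as long as clean_tx() ran out of
	 * weight before running out of work; if it stops too early,
	 * completed descriptors linger and the ring can look hung. */
	while (clean_tx())
		polls++;

	printf("drained in %d polls\n", polls + 1);
	return 0;
}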

I think that is probably related.  As a test you could apply the
original patch and remove this "break" just by commenting out line
4008.  That would guarantee all tx work is cleaned on every call to
e1000_clean.
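
(If it helps, the test change would look roughly like this, i.e. the
surrounding lines unchanged and only the early exit disabled; treat it
as a sketch rather than a tested patch:)

4007 		/* weight of a sort for tx, to avoid endless transmit cleanup */
4008 		/* if (count++ == E1000_TX_WEIGHT) break; */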

Jesse
