Date:	Tue, 09 Oct 2007 09:43:11 -0400
From:	jamal <hadi@...erus.ca>
To:	Herbert Xu <herbert@...dor.apana.org.au>
Cc:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	kaber@...sh.net, dada1@...mosbay.com, johnpol@....mipt.ru
Subject: Re: [RFC][NET_SCHED] explict hold dev tx lock

On Tue, 2007-09-10 at 12:00 +0800, Herbert Xu wrote:

> 
> OK, after waking up a bit more 

me too;-> 

> What I'm worried about is would we see worse behaviour with
> drivers that do all their TX clean-up with the TX lock held

Good point, Herbert.
When I looked around, I only found one driver that behaved like that:
some IBM mainframe one, from the looks of it. That driver did a lot of
other obscure things (I think it maintains its own NAPI calls, etc.),
so it didn't worry me very much.
IIRC, it could be fixed to do what tg3 and relatives like bnx do
(I really like that approach) - I just didn't have the time to chase it.
There are a _lot_ more drivers that have no respect for netif_tx_lock
and implement their own locking down in the driver. Those already suffer
from the phenomenon you describe whether the TX lock is held or not.
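
Something along these lines - a completely made-up fragment just to show
the pattern (struct bad_priv and the bad_* names are not from any real
driver):

#include <linux/spinlock.h>
#include <linux/netdevice.h>

struct bad_priv {
        spinlock_t tx_lock;     /* driver-private lock, on top of netif_tx_lock */
        /* ... rings, registers ... */
};

/* Both callers below run in softirq context, so a plain spin_lock is
 * enough for this sketch. */

static int bad_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct bad_priv *priv = netdev_priv(dev);

        /* The core already serializes us via netif_tx_lock, but the
         * driver takes its own lock anyway. */
        spin_lock(&priv->tx_lock);
        /* ... post skb to the hardware ring ... */
        spin_unlock(&priv->tx_lock);
        return NETDEV_TX_OK;
}

/* Called from the driver's NAPI poll.  The lock is held across the whole
 * clean-up, so the xmit path above can spin on it for as long as this
 * takes - with or without netif_tx_lock in the core. */
static void bad_clean_tx(struct bad_priv *priv)
{
        spin_lock(&priv->tx_lock);
        /* ... walk and free every completed descriptor ... */
        spin_unlock(&priv->tx_lock);
}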

> (which would cause qdisc_restart to spin while this is happening)?

Yes, with such a driver we spin in the worst case. But that provides
opportunities to optimize a driver behaving that way; two approaches off
the top of my head (rough sketch of both further below):
a) prune the TX descriptors on the TX path, since it is safe to do so
there, and thereby reduce the amount of time spent doing that work in
the NAPI poll;
or
b) have the NAPI side do a trylock (sort of what the e1000 attempts to
do) and reschedule the poll to retry.
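
Untested sketch of both, against the 2.6.24-style NAPI API. Everything
prefixed my_ is a placeholder (not a real driver), and the trylock part
is only loosely modelled on what e1000 does:

#include <linux/kernel.h>
#include <linux/netdevice.h>

struct my_adapter {
        struct napi_struct napi;
        struct net_device *netdev;
        /* ... TX/RX rings, registers ... */
};

static void my_clean_tx_ring(struct my_adapter *ap)
{
        /* reap completed TX descriptors, free their skbs */
}

static int my_clean_rx(struct my_adapter *ap, int budget)
{
        /* process up to budget RX packets, return how many were done */
        return 0;
}

/* (a): prune completed TX descriptors on the xmit path itself; we
 * already hold the TX lock here, so it is safe, and it shrinks the
 * amount of work left for the poll routine. */
static int my_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct my_adapter *ap = netdev_priv(dev);

        my_clean_tx_ring(ap);
        /* ... map skb, post the descriptor, kick the hardware ... */
        return NETDEV_TX_OK;
}

/* (b): in the NAPI poll, only trylock the TX lock; if another CPU is in
 * the xmit path we skip TX clean-up instead of spinning, and retry on
 * the next poll. */
static int my_poll(struct napi_struct *napi, int budget)
{
        struct my_adapter *ap = container_of(napi, struct my_adapter, napi);
        struct net_device *dev = ap->netdev;
        int tx_cleaned = 0, work_done;

        if (netif_tx_trylock(dev)) {
                my_clean_tx_ring(ap);
                netif_tx_unlock(dev);
                tx_cleaned = 1;
        }

        work_done = my_clean_rx(ap, budget);

        /* Only stop polling once TX was reaped and RX is drained; if the
         * trylock failed, returning budget keeps us on the poll list so
         * the clean-up is retried shortly instead of spinning now. */
        if (tx_cleaned && work_done < budget) {
                netif_rx_complete(dev, napi);
                /* re-enable the device's interrupts here */
                return work_done;
        }
        return budget;
}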

#b is fair because the cost of the queue_lock goes up as the number of
CPUs goes up (which is the case I was optimizing for), whereas the cost
of the TX lock is, in the worst case, contention between two CPUs.
Thoughts?

cheers,
jamal
