Date:	Sat, 25 Jul 2009 11:24:36 +0800
From:	Herbert Xu <herbert@...dor.apana.org.au>
To:	Krishna Kumar2 <krkumar2@...ibm.com>
Cc:	davem@...emloft.net, Jarek Poplawski <jarkao2@...il.com>,
	netdev@...r.kernel.org
Subject: Re: [RFC] [PATCH] Don't run __qdisc_run() on a stopped TX queue

On Fri, Jul 24, 2009 at 04:01:15PM +0530, Krishna Kumar2 wrote:
>
> Assuming many CPUs share a queue, only one can xmit due to the
> RUNNING bit. And after the RUNNING bit is taken, no other cpu can
> stop the queue. So the only change in behavior with this
> patch is that the xmit is terminated a little earlier compared
> to the current code. In the case of a stopped queue, the patch helps
> a little bit more by removing one stopped check for each queued
> skb, including those skbs that are added later while the current
> xmit session (qdisc_run) is ongoing.
> 
> I hope I have addressed your concern?

That got me to actually look at your patch :)

You're essentially reverting f4ab543201992fe499bef5c406e09f23aa97b4d5,
which cannot be right since the same problem still exists.

However, I am definitely with you in that we should perform this
optimisation since it makes sense for the majority of people who
use multiqueue TX.

So the fact that our current architecture penalises the people
who actually need multiqueue TX in order to ensure correctness
for the people who cannot use multiqueue TX effectively (i.e.,
those who use non-default qdiscs) makes me uneasy.

Dave, remember our discussion about the benefits of using multiqueue
TX just for the sake of enlarging the TX queues? How about just
going back to using a single queue for non-default qdiscs (at
least until such a time when non-default qdiscs start doing
multiple queues internally)?

Yes it would mean potentially smaller queues for those non-default
qdisc users, but they're usually the same people who want the
hardware to queue as little as possible in order to enforce whatever
it is that their qdisc is designed to enforce.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
