Message-ID: <20080923062333.GA26595@gondor.apana.org.au>
Date:	Tue, 23 Sep 2008 14:23:33 +0800
From:	Herbert Xu <herbert@...dor.apana.org.au>
To:	David Miller <davem@...emloft.net>
Cc:	jarkao2@...il.com, netdev@...r.kernel.org, kaber@...sh.net,
	alexander.h.duyck@...el.com
Subject: Re: [PATCH take 2] pkt_sched: Fix qdisc_watchdog() vs.
	dev_deactivate() race

On Sun, Sep 21, 2008 at 12:03:01AM -0700, David Miller wrote:
>
> This works if you want it at the root, but what if you only wanted to
> prio at a leaf?  I think that case has value too.

Good question :)

I think what we should do is to pass some token that represents
the TX queue that's being run down into the dequeue function.

Then each qdisc can decide which child to recursively dequeue
based on that token (or ignore it for non-prio qdiscs such as
HTB).  When the token reaches the leaf then we have two cases:

1) A prio-like qdisc that has separate queues based on priorities.
In this case we dequeue the respective queue based on the token.

2) Any other qdisc.  We dequeue the first packet that hashes
into the queue given by the token.  Ideally these qdiscs should
have separate queues already so that this would be trivial.
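To make the two cases concrete, here is a toy user-space sketch in
plain C.  All names and struct layouts are hypothetical stand-ins,
not the kernel's sk_buff/Qdisc types, and the hash is reduced to a
modulo for illustration:

```c
#include <stddef.h>

#define NBANDS 4

/* Hypothetical packet model, not the kernel's struct sk_buff. */
struct pkt {
	int flow;		/* flow identifier used for hashing */
	struct pkt *next;
};

static int hash_to_band(int flow)
{
	return flow % NBANDS;	/* stand-in for the real TX hash */
}

/* Case 1: a prio-like qdisc keeps one queue per band, so the token
 * selects the queue directly and dequeue is O(1). */
struct prio_qdisc {
	struct pkt *band[NBANDS];
};

static struct pkt *prio_dequeue(struct prio_qdisc *q, int token)
{
	struct pkt *p = q->band[token];

	if (p)
		q->band[token] = p->next;
	return p;
}

/* Case 2: a qdisc with a single FIFO must scan for the first packet
 * that hashes into the band named by the token.  With per-band
 * queues this would reduce to the O(1) case above. */
static struct pkt *fifo_dequeue_for_band(struct pkt **head, int token)
{
	struct pkt **pp;

	for (pp = head; *pp; pp = &(*pp)->next) {
		if (hash_to_band((*pp)->flow) == token) {
			struct pkt *p = *pp;

			*pp = p->next;
			p->next = NULL;
			return p;
		}
	}
	return NULL;
}
```

The scan in case 2 is exactly the cost we'd avoid if those qdiscs
kept separate queues per band from the start.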

> I tend to also disagree with another mentioned assertion.  The one
> where having a shared qdisc sucks on SMP.  It doesn't.  The TX queue
> lock is held much longer than the qdisc lock.

Yes I was exaggerating :)

However, after answering your question above I'm even more convinced
that we should be separating the traffic at the point of enqueueing,
and not after we dequeue it in qdisc_run.

The only reason to do the separation after dequeueing would be to
allow the TX queue selection to change in the meantime.  However,
since I see absolutely no reason why we'd need that, it's just so
much simpler to separate them at qdisc_enqueue, and actually have
the same number of software queues as there are hardware queues.
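As a rough user-space sketch of that enqueue-time separation
(hypothetical names again, with queue selection reduced to a modulo
in place of the driver's real select-queue logic):

```c
#include <stddef.h>

#define NTXQ 4	/* assume the NIC exposes 4 hardware TX queues */

/* Hypothetical model: one software queue per hardware TX queue,
 * so classification happens once, at enqueue time. */
struct pkt {
	int flow;
	struct pkt *next;
};

struct mq_qdisc {
	struct pkt *head[NTXQ];
	struct pkt *tail[NTXQ];
};

static int select_txq(int flow)
{
	return flow % NTXQ;	/* stand-in for the driver's selection */
}

static void mq_enqueue(struct mq_qdisc *q, struct pkt *p)
{
	int txq = select_txq(p->flow);	/* decided here, not revisited */

	p->next = NULL;
	if (q->tail[txq])
		q->tail[txq]->next = p;
	else
		q->head[txq] = p;
	q->tail[txq] = p;
}

/* qdisc_run for TX queue `txq` only ever touches its own list,
 * so there is nothing left to sort out after dequeue. */
static struct pkt *mq_dequeue(struct mq_qdisc *q, int txq)
{
	struct pkt *p = q->head[txq];

	if (p) {
		q->head[txq] = p->next;
		if (!q->head[txq])
			q->tail[txq] = NULL;
	}
	return p;
}
```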

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html