Message-ID: <20090804034910.GA30871@gondor.apana.org.au>
Date:	Tue, 4 Aug 2009 11:49:10 +0800
From:	Herbert Xu <herbert@...dor.apana.org.au>
To:	David Miller <davem@...emloft.net>
Cc:	krkumar2@...ibm.com, jarkao2@...il.com, kaber@...sh.net,
	netdev@...r.kernel.org
Subject: Re: [RFC] [PATCH] Avoid enqueuing skb for default qdiscs

On Mon, Aug 03, 2009 at 08:29:35PM -0700, David Miller wrote:
>
> Although PFIFO is work-conserving, isn't it important to retain
> ordering?  What if higher-priority packets are in the queue when we
> enqueue?  This new bypass will send the wrong packet, won't it?

The bypass only kicks in when the queue length is zero, so there can
be no higher-priority packet waiting to be overtaken; ordering is
preserved.
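
For reference, the fast path has roughly the following shape (a
sketch only, not the actual patch; qdisc_qlen(), qdisc_run_begin()/
qdisc_run_end() and sch_direct_xmit() here are stand-ins for whatever
helpers the final code ends up using):

	int rc;

	/*
	 * Fast path: the qdisc is empty and nobody else is running it,
	 * so hand the skb straight to the driver instead of enqueueing
	 * it.  An empty queue means no higher-priority packet can be
	 * waiting, so packet ordering is preserved.
	 */
	if (!qdisc_qlen(q) && qdisc_run_begin(q)) {
		/*
		 * If the driver was busy the skb has been requeued;
		 * drain it the normal way.
		 */
		if (sch_direct_xmit(skb, q, dev, txq, root_lock))
			__qdisc_run(q);
		qdisc_run_end(q);
		return NET_XMIT_SUCCESS;
	}

	/* Slow path: the queue is non-empty, go through the qdisc. */
	rc = q->enqueue(skb, q);
	qdisc_run(q);
	return rc;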

> I'm beginning to think, if we want to make the default case go as fast
> as possible, we should just bypass everything altogether.  The entire
> qdisc layer, all of it.

Can you be more specific? AFAICS he's already bypassing the qdisc
layer when it can be done safely.

> Special-casing something that is essentially unused is, in a way,
> a waste of time.  If this bypass could be applied to some of the
> complicated qdiscs, then it'd be worthwhile, but just for the
> default, where the bypass effectively makes the qdisc do nothing,
> I don't see the value in it.

I agree with this sentiment.  Essentially what this bypass does
is eliminate the enqueue + qdisc_restart + dequeue sequence in the
cases where that is safe.  So its value depends entirely on the
cost of the code that is eliminated, which may not be that large
for the default qdisc.
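
Concretely, for an skb hitting an empty default qdisc, the bypass
saves a round trip of roughly this shape (illustrative pseudo-C of
the existing slow path, not actual kernel source):

	q->enqueue(skb, q);	/* put the skb on the FIFO ...        */
	qdisc_run(q);		/* ... which, via qdisc_restart(),    */
				/* immediately does the equivalent of */
				/* skb = q->dequeue(q) and then       */
				/* dev_hard_start_xmit() on the very  */
				/* skb we just queued.                */

So the per-packet saving is one enqueue, one dequeue and the
qdisc_restart() bookkeeping in between; for pfifo_fast these are
little more than list operations.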

Krishna, were the netperf tests you ran done with a single TX queue
or with multiple TX queues?  If the latter, did you tune the system
so that each netperf instance was bound to a single core with its
own dedicated TX queue?

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt