Date:	Tue, 4 Aug 2009 12:15:13 +0530
From:	Krishna Kumar2 <krkumar2@...ibm.com>
To:	Herbert Xu <herbert@...dor.apana.org.au>
Cc:	David Miller <davem@...emloft.net>, jarkao2@...il.com,
	kaber@...sh.net, netdev@...r.kernel.org
Subject: Re: [RFC] [PATCH] Avoid enqueuing skb for default qdiscs

Hi Herbert,

Herbert Xu <herbert@...dor.apana.org.au> wrote on 08/04/2009 09:19:10 AM:
> On Mon, Aug 03, 2009 at 08:29:35PM -0700, David Miller wrote:
> >
> > Although PFIFO is not work-conserving, isn't it important to retain
> > ordering?  What if higher priority packets are in the queue when we
> > enqueue?  This new bypass will send the wrong packet, won't it?
>
> The bypass only kicks in if the queue length is zero.
>
> > I'm beginning to think, if we want to make the default case go as fast
> > as possible, we should just bypass everything altogether.  The entire
> > qdisc layer, all of it.
>
> Can you be more specific? AFAICS he's already bypassing the qdisc
> layer when it can be done safely.
>
> > Special-casing something that is essentially unused is, in a way,
> > a waste of time.  If this bypass could be applied to some of the
> > complicated qdiscs, then it'd be worthwhile, but just for the
> > default, which the bypass effectively turns into a no-op, I don't
> > see the value in it.
>
> I agree with this sentiment.  Essentially what this bypass does
> is to eliminate the enqueue + qdisc_restart + dequeue in the
> cases where it is safe.  So its value is entirely dependent on
> the cost of the code that is eliminated, which may not be that
> large for the default qdisc.
>
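
To make this concrete: the fast path being discussed amounts to
something like the user-space model below.  All of the names
(dev_xmit, driver_busy, etc.) are illustrative, not the actual
kernel code; it only sketches the control flow.  Note that the
bypass is taken only when the queue is empty, so ordering between
queued packets cannot be violated.

/* Minimal model of the bypass: if the software queue is empty and
 * the driver can take a packet, transmit directly and skip
 * enqueue + qdisc_restart + dequeue.  Illustrative names only. */
#include <stdbool.h>
#include <stdio.h>

struct pkt { int id; };

#define QLEN 16
static struct pkt queue[QLEN];
static unsigned int qhead, qtail, qlen;

static bool driver_busy;                /* models a stopped TX ring */

static void driver_xmit(struct pkt *p)
{
        printf("xmit pkt %d\n", p->id);
}

static void enqueue(struct pkt *p)
{
        queue[qtail++ % QLEN] = *p;
        qlen++;
}

static struct pkt *dequeue(void)
{
        if (!qlen)
                return NULL;
        qlen--;
        return &queue[qhead++ % QLEN];
}

/* drain loop, the moral equivalent of qdisc_restart */
static void qdisc_run(void)
{
        struct pkt *p;

        while (!driver_busy && (p = dequeue()))
                driver_xmit(p);
}

static void dev_xmit(struct pkt *p)
{
        if (!qlen && !driver_busy) {
                driver_xmit(p);         /* bypass: queue is empty */
                return;
        }
        enqueue(p);                     /* slow path: FIFO order kept */
        qdisc_run();
}

int main(void)
{
        struct pkt a = { 1 }, b = { 2 }, c = { 3 };

        dev_xmit(&a);                   /* queue empty -> sent directly */
        driver_busy = true;
        dev_xmit(&b);                   /* ring stopped -> enqueued */
        dev_xmit(&c);                   /* queue non-empty -> enqueued */
        driver_busy = false;
        qdisc_run();                    /* drains pkts 2 and 3, in order */
        return 0;
}

The saving on real hardware is exactly the enqueue/dequeue work
removed from the empty-queue path, which is why, as you say, the
value depends on how costly that is for the default qdisc.
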
> Krishna, the netperf tests that you performed: were they done
> with a single TX queue or multiple TX queues?  If the latter, did
> you tune the system so that each netperf was bound to a single
> core which had its own dedicated TX queue?

I ran the tests on a Chelsio card with multiple TX queues, but with
the default netperf and no tuning at all (no irqbalance changes, IRQ
binding, CPU binding of netperf/netserver, sysctl settings, etc.).
Do you want me to try with specific tuning as well?
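
For example, something along these lines (interface name, IRQ
placeholders and CPU ids below are just examples, not from my setup):

        # keep irqbalance from moving the IRQs around
        service irqbalance stop
        # find the NIC's per-queue IRQs
        grep eth0 /proc/interrupts
        # pin TX queue 0's IRQ to CPU 0 (mask 0x1), queue 1 to CPU 1 (0x2), ...
        echo 1 > /proc/irq/<irq0>/smp_affinity
        echo 2 > /proc/irq/<irq1>/smp_affinity
        # bind one netperf per core, matching the queue/IRQ layout
        taskset -c 0 netperf -H <server> &
        taskset -c 1 netperf -H <server> &

(netperf's -T option can bind the local/remote ends as well.)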

thanks,

- KK
