Date:	Fri, 13 Nov 2009 08:15:53 -0800
From:	Stephen Hemminger <shemminger@...tta.com>
To:	Changli Gao <xiaosuo@...il.com>
Cc:	Jarek Poplawski <jarkao2@...il.com>,
	Eric Dumazet <eric.dumazet@...il.com>,
	"David S. Miller" <davem@...emloft.net>,
	Patrick McHardy <kaber@...sh.net>,
	Tom Herbert <therbert@...gle.com>, netdev@...r.kernel.org
Subject: Re: [PATCH] ifb: add multi-queue support

On Fri, 13 Nov 2009 17:38:56 +0800
Changli Gao <xiaosuo@...il.com> wrote:

> On Fri, Nov 13, 2009 at 5:18 PM, Jarek Poplawski <jarkao2@...il.com> wrote:
> > On Fri, Nov 13, 2009 at 04:54:50PM +0800, Changli Gao wrote:
> >> On Fri, Nov 13, 2009 at 3:45 PM, Jarek Poplawski <jarkao2@...il.com> wrote:
> >>
> >> I have done a simple test. I ran a simple program on computer A which
> >> sends SYN packets with random source ports to port 80 on computer B
> >> (no socket listens on that port, so TCP reset packets are sent back)
> >> at 90 kpps. On computer B, I redirect the traffic to IFB. At the same
> >> time, I ping from B to A to measure the RTT between them. I can't see
> >> any difference between the original IFB and my MQ version. They both
> >> show:
> >>
> >> CPU idle: 50%
> >> Latency: 0.3-0.4 ms, bursts up to 2 ms.
> >>
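For reference, a minimal user-space sketch in C of the kind of SYN
generator described above. This is not Changli's actual program (the
thread does not include it); the addresses, the pacing loop, and all
names here are illustrative assumptions:

#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* 16-bit one's-complement checksum over an even number of bytes */
static unsigned short csum16(const unsigned short *p, int len)
{
	unsigned long sum = 0;

	for (; len > 1; len -= 2)
		sum += *p++;
	while (sum >> 16)
		sum = (sum >> 16) + (sum & 0xffff);
	return (unsigned short)~sum;
}

int main(void)
{
	int s = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);	/* needs root */
	int one = 1;
	struct sockaddr_in dst = { .sin_family = AF_INET };
	unsigned char pkt[sizeof(struct iphdr) + sizeof(struct tcphdr)];
	struct iphdr *ip = (struct iphdr *)pkt;
	struct tcphdr *tcp = (struct tcphdr *)(pkt + sizeof(*ip));
	struct {			/* TCP pseudo-header for checksum */
		unsigned int src, dst;
		unsigned char zero, proto;
		unsigned short len;
	} __attribute__((packed)) ph;
	unsigned short cbuf[(sizeof(ph) + sizeof(struct tcphdr)) / 2];

	if (s < 0) {
		perror("socket");
		return 1;
	}
	setsockopt(s, IPPROTO_IP, IP_HDRINCL, &one, sizeof(one));

	memset(pkt, 0, sizeof(pkt));
	inet_pton(AF_INET, "192.0.2.2", &dst.sin_addr);	/* "computer B" */

	ip->version = 4;
	ip->ihl = 5;
	ip->ttl = 64;
	ip->protocol = IPPROTO_TCP;
	ip->tot_len = htons(sizeof(pkt));
	inet_pton(AF_INET, "192.0.2.1", &ip->saddr);	/* "computer A" */
	ip->daddr = dst.sin_addr.s_addr;
	/* ip->check left 0: the kernel fills it in under IP_HDRINCL */

	tcp->dest = htons(80);
	tcp->doff = sizeof(*tcp) / 4;
	tcp->syn = 1;
	tcp->window = htons(65535);

	ph.src = ip->saddr;
	ph.dst = ip->daddr;
	ph.zero = 0;
	ph.proto = IPPROTO_TCP;
	ph.len = htons(sizeof(*tcp));

	for (;;) {
		tcp->source = htons(1024 + rand() % 64000); /* random port */
		tcp->check = 0;
		memcpy(cbuf, &ph, sizeof(ph));
		memcpy((unsigned char *)cbuf + sizeof(ph), tcp, sizeof(*tcp));
		tcp->check = csum16(cbuf, sizeof(cbuf));

		sendto(s, pkt, sizeof(pkt), 0,
		       (struct sockaddr *)&dst, sizeof(dst));
		usleep(10);	/* crude pacing; tune toward ~90 kpps */
	}
}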
> >
> > I'm mostly concerned with routers doing forwarding with 1Gb or 10Gb
> > NICs (including multiqueue). Alas (or happily), I don't have such a
> > setup myself, so I can't help you with testing either.
> >
> 
> Oh, :) . I know of more than one company that uses kernel threads to
> forward packets, and there is no noticeable extra overhead at all. And,
> as you know, as throughput increases, NAPI will bind the NIC to a CPU,
> and ksoftirqd will be woken up to do the work that would normally be
> done in softirq context. At that point, there is no difference between
> my approach and the current kernel's.
> 
> 
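To make the kernel-thread style under discussion concrete, here is a
hedged sketch of a per-queue worker; the structure and names (ifb_queue,
ifb_thread) are assumptions for illustration, not code from the actual
patch:

#include <linux/kthread.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/wait.h>

struct ifb_queue {
	struct sk_buff_head	rq;	/* skbs handed off by ndo_start_xmit */
	struct task_struct	*task;	/* worker thread for this queue */
	wait_queue_head_t	wq;
};

static int ifb_thread(void *arg)
{
	struct ifb_queue *q = arg;
	struct sk_buff *skb;

	while (!kthread_should_stop()) {
		wait_event_interruptible(q->wq,
			!skb_queue_empty(&q->rq) || kthread_should_stop());

		/* drain the queue and reinject from process context */
		while ((skb = skb_dequeue(&q->rq)) != NULL)
			netif_rx_ni(skb);
	}
	return 0;
}

/* the xmit path would do:
 *	skb_queue_tail(&q->rq, skb);
 *	wake_up(&q->wq);
 * and setup would do:
 *	q->task = kthread_run(ifb_thread, q, "ifb/%d", queue_index);
 */

Under sustained load such a thread ends up monopolizing a CPU much like
ksoftirqd does, which is the convergence Changli is pointing at.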

Why not make IFB a NAPI device? This would get rid of the extra soft-irq
round trip from going through netif_rx(). It would also behave like a
regular multi-queue receive device, and eliminate the need for separate
tasklets or threads.
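
For illustration, a rough sketch of what a NAPI-based IFB could look
like. The structure and names are assumptions for this sketch, not an
actual patch: the xmit path queues the skb and schedules NAPI, and the
poll callback drains the queue in softirq context with no netif_rx()
round trip:

#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define IFB_NAPI_WEIGHT	64

struct ifb_priv {
	struct sk_buff_head	rq;
	struct napi_struct	napi;
};

static int ifb_poll(struct napi_struct *napi, int budget)
{
	struct ifb_priv *p = container_of(napi, struct ifb_priv, napi);
	struct sk_buff *skb;
	int work = 0;

	while (work < budget && (skb = skb_dequeue(&p->rq)) != NULL) {
		netif_receive_skb(skb);	/* deliver directly from the poll
					 * loop: no extra softirq trip */
		work++;
	}
	if (work < budget)
		napi_complete(napi);	/* queue drained, stop polling */
	return work;
}

static netdev_tx_t ifb_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct ifb_priv *p = netdev_priv(dev);

	skb_queue_tail(&p->rq, skb);
	napi_schedule(&p->napi);	/* ifb_poll() runs in NET_RX softirq */
	return NETDEV_TX_OK;
}

/* at device setup:
 *	netif_napi_add(dev, &p->napi, ifb_poll, IFB_NAPI_WEIGHT);
 *	napi_enable(&p->napi);
 */

The appeal is that redirected traffic then flows through the same
budgeted softirq polling as packets arriving on a real NIC, so no
private tasklet or thread machinery is needed.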

