Message-ID: <20090225063656.GA32635@gondor.apana.org.au>
Date:	Wed, 25 Feb 2009 14:36:56 +0800
From:	Herbert Xu <herbert@...dor.apana.org.au>
To:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
Cc:	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	jesse.brandeburg@...el.com
Subject: Re: [RFC v1] hand off skb list to other cpu to submit to upper
	layer

Zhang, Yanmin <yanmin_zhang@...ux.intel.com> wrote:
> Subject: hand off skb list to other cpu to submit to upper layer
> From: Zhang Yanmin <yanmin.zhang@...ux.intel.com>
> 
> Recently, I have been investigating an ip_forward performance issue with a 10G
> IXGBE NIC. I run the test on 2 machines, each with 2 10G NICs. The 1st machine
> sends packets with pktgen. The 2nd receives the packets on one NIC and forwards
> them out through the other. As the NICs support multi-queue, I bind the queues
> to logical cpus on different physical cpus while considering cache sharing
> carefully.
> 
> Compared with the sending speed on the 1st machine, the forwarding speed is
> poor, only about 60% of the sending speed. The IXGBE driver starts NAPI when an
> interrupt arrives. With ip_forward=1, the receiver forwards each packet out
> immediately after collecting it. So although IXGBE collects packets with NAPI,
> the forwarding has a big impact on collection. As IXGBE runs very fast, it
> drops packets quickly. The receiving cpu would be better off doing nothing but
> collecting packets.
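
A minimal sketch of the handoff idea described above, for readers following
along.  This is not the RFC patch itself: handoff_queue, handoff_work_fn and
handoff_skb are made-up names, and it uses a per-cpu workqueue where the real
patch may well use a different mechanism such as softirq IPIs.

	#include <linux/module.h>
	#include <linux/skbuff.h>
	#include <linux/netdevice.h>
	#include <linux/workqueue.h>
	#include <linux/percpu.h>

	/* Hypothetical per-cpu state, one queue per target cpu. */
	struct handoff_queue {
		struct sk_buff_head	skbs;	/* packets awaiting the remote cpu */
		struct work_struct	work;	/* drains skbs on the remote cpu */
	};

	static DEFINE_PER_CPU(struct handoff_queue, handoff_queues);

	/* Runs on the target cpu: push the queued packets up the stack. */
	static void handoff_work_fn(struct work_struct *work)
	{
		struct handoff_queue *q =
			container_of(work, struct handoff_queue, work);
		struct sk_buff *skb;

		local_bh_disable();	/* netif_receive_skb() wants bh context */
		while ((skb = skb_dequeue(&q->skbs)) != NULL)
			netif_receive_skb(skb);
		local_bh_enable();
	}

	/* Called from NAPI poll on the receiving cpu: queue the skb for
	 * another cpu so this cpu can go back to draining the RX ring. */
	static void handoff_skb(struct sk_buff *skb, int cpu)
	{
		struct handoff_queue *q = &per_cpu(handoff_queues, cpu);

		skb_queue_tail(&q->skbs, skb);	/* irq-safe enqueue */
		schedule_work_on(cpu, &q->work);
	}

	static int __init handoff_init(void)
	{
		int cpu;

		for_each_possible_cpu(cpu) {
			struct handoff_queue *q = &per_cpu(handoff_queues, cpu);

			skb_queue_head_init(&q->skbs);
			INIT_WORK(&q->work, handoff_work_fn);
		}
		return 0;
	}
	module_init(handoff_init);

Handing off one skb at a time like this would add a lot of per-packet
overhead; presumably the RFC hands off a whole NAPI poll's worth of skbs at
once, which is what the "skb list" in the subject suggests.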

This doesn't make sense.  With multiqueue RX, every core should be
working to receive its fraction of the traffic and forward it out.
So you shouldn't have any idle cores to begin with.  The fact
that you do means that multiqueue RX hasn't maximised its utility,
so you should tackle that instead of trying to redirect traffic
away from the cores that are receiving.

Of course, for NICs that don't support multiqueue RX, or where the
number of RX queues is less than the number of cores, a scheme
like yours may be useful.
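
For that partial-multiqueue case, the receiving cpu still has to pick a
target cpu to hand packets to.  A minimal sketch of one policy, round-robin
over the online cpus (pick_handoff_cpu is a made-up helper, not from the RFC;
a real version should prefer cpus sharing a cache with the receiver, as the
original post notes, and skip cpus that own RX queues):

	#include <linux/cpumask.h>

	/* Hypothetical target selection; deliberately simple. */
	static int pick_handoff_cpu(void)
	{
		static int last = -1;
		int cpu = cpumask_next(last, cpu_online_mask);

		if (cpu >= nr_cpu_ids)			/* wrap around */
			cpu = cpumask_first(cpu_online_mask);
		last = cpu;	/* racy, but good enough for a sketch */
		return cpu;
	}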

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
