Message-Id: <1236220827.2567.136.camel@ymzhang>
Date:	Thu, 05 Mar 2009 10:40:27 +0800
From:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To:	David Miller <davem@...emloft.net>
Cc:	herbert@...dor.apana.org.au, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org, jesse.brandeburg@...el.com,
	shemminger@...tta.com
Subject: Re: [RFC v1] hand off skb list to other cpu to submit to upper
	layer

On Thu, 2009-03-05 at 09:04 +0800, Zhang, Yanmin wrote:
> On Wed, 2009-03-04 at 01:39 -0800, David Miller wrote:
> > From: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
> > Date: Wed, 04 Mar 2009 17:27:48 +0800
> > 
> > > Both the new skb_record_rx_queue and the current kernel make an
> > > assumption about multi-queue: it is best to send outgoing packets
> > > on the TX queue with the same index as the RX queue on which the
> > > related packets were received. Put more directly, we should send
> > > packets on the same cpu on which we received them. The starting
> > > point is that this reduces skb and data cache misses.
> > 
> > We have to use the same TX queue for all packets for the same
> > connection flow (same src/dst IP address and ports) otherwise
> > we introduce reordering.
> > Herbert brought this up, now I have explicitly brought this up,
> > and you cannot ignore this issue.
> Thanks. Stephen Hemminger brought it up and explained what reordering
> is. I answered in a reply (sorry it wasn't clear) that mostly we need to
> spread packets among RX/TX queues in a 1:1 or N:1 mapping. For example,
> all packets received on RX 8 are always spread to TX 0.
To make it clearer, I used a 1:1 mapping binding when running the tests
on Bensley (4*2 cores) and Nehalem (2*4*2 logical cpus), so there is no
reordering issue. I also worked out a new patch for the failover path that
just drops packets when qlen is bigger than netdev_max_backlog, so the
failover path won't cause reordering.
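
To make the intent concrete, here is a rough sketch of the two pieces
(illustrative only, not the actual patch; queue_to_other_cpu() and
rx_to_tx_queue() are made-up names, and locking/NAPI details are omitted):

/*
 * Sketch only: the rx cpu appends the skb to a per-cpu list owned by
 * the target cpu, and the failover path drops once that list is longer
 * than netdev_max_backlog instead of falling back to local delivery,
 * so packets of a flow are never reordered. "alien_queue" is a made-up
 * name for that hypothetical per-cpu list.
 */
static int queue_to_other_cpu(struct sk_buff_head *alien_queue,
			      struct sk_buff *skb)
{
	if (skb_queue_len(alien_queue) > netdev_max_backlog) {
		kfree_skb(skb);		/* drop rather than reorder */
		return NET_RX_DROP;
	}
	skb_queue_tail(alien_queue, skb);	/* takes the list lock */
	return NET_RX_SUCCESS;
}

/*
 * The 1:1 rx->tx binding used in the tests: reuse the recorded rx
 * queue index as the tx queue index, in the same spirit as what
 * skb_tx_hash() does when skb_rx_queue_recorded() is true.
 */
static u16 rx_to_tx_queue(struct net_device *dev, struct sk_buff *skb)
{
	if (skb_rx_queue_recorded(skb))
		return skb_get_rx_queue(skb) % dev->real_num_tx_queues;
	return 0;	/* unrecorded skbs just use queue 0 here */
}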

> 
> 
> > 
> > You must not knowingly reorder packets, and using different TX
> > queues for packets within the same flow does that.
> Thanks for your explanation, which is really consistent with what Stephen said.
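
For reference, the constraint above is simple to state in code: only the
flow key may pick the tx queue, so every packet of a flow lands on the
same queue. An illustrative plain-C sketch (not kernel code, names made up):

#include <stdint.h>

/* Pick a tx queue from the flow 4-tuple only, so a given src/dst
 * ip:port pair always maps to the same queue and queue selection can
 * never reorder packets within that flow. */
static unsigned int flow_tx_queue(uint32_t saddr, uint32_t daddr,
				  uint16_t sport, uint16_t dport,
				  unsigned int num_tx_queues)
{
	uint32_t hash = saddr ^ daddr ^ ((uint32_t)sport << 16 | dport);

	return hash % num_tx_queues;
}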


--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
