Message-Id: <1235529343.2604.499.camel@ymzhang>
Date:	Wed, 25 Feb 2009 10:35:43 +0800
From:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To:	Stephen Hemminger <shemminger@...tta.com>
Cc:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>, jesse.brandeburg@...el.com
Subject: Re: [RFC v1] hand off skb list to other cpu to submit to upper
	layer

On Tue, 2009-02-24 at 18:11 -0800, Stephen Hemminger wrote:
> On Wed, 25 Feb 2009 09:27:49 +0800
> "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com> wrote:
> 
> > Subject: hand off skb list to other cpu to submit to upper layer
> > From: Zhang Yanmin <yanmin.zhang@...ux.intel.com>
> > 
> > Recently, I have been investigating an ip_forward performance issue with a 10G IXGBE NIC.
> > I run the test on 2 machines. Each machine has 2 10G NICs. The 1st one sends
> > packets with pktgen. The 2nd receives the packets on one NIC and forwards them out
> > through the 2nd NIC. As the NICs support multi-queue, I bind the queues to different logical
> > cpus of different physical cpus while considering cache sharing carefully.
> > 
> > Compared with the sending speed on the 1st machine, the forwarding speed is not good, only
> > about 60% of the sending speed. As a matter of fact, the IXGBE driver starts NAPI when an
> > interrupt arrives. With ip_forward=1, the receiver collects a packet and forwards it out
> > immediately. So although IXGBE collects packets with NAPI, the forwarding has a big impact
> > on collection. As IXGBE runs very fast, it drops packets quickly. The better approach for
> > the receiving cpu is to do nothing but collect packets.
> > 
> > Currently the kernel has the backlog to support a similar capability, but process_backlog
> > still runs on the receiving cpu. I enhance the backlog by adding a new input_pkt_alien_queue
> > to softnet_data. The receiving cpu collects packets, links them into an skb list, and then
> > delivers the list to the input_pkt_alien_queue of another cpu. process_backlog picks up the
> > skb list from input_pkt_alien_queue when input_pkt_queue is empty.
> > 
> > A NIC driver could use this capability with the steps below in its NAPI RX cleanup function.
> > 1) Initialize a local variable struct sk_buff_head skb_head;
> > 2) In the packet collection loop, just call netif_rx_queue or __skb_queue_tail(skb_head, skb)
> > to add the skb to the list;
> > 3) Before exiting, call raise_netif_irq to submit the skb list to a specific cpu.
> > 
> > Enlarge /proc/sys/net/core/netdev_max_backlog and netdev_budget before testing.
> > 
> > I tested my patch on top of 2.6.28.5. The improvement is about 43%.
> > 
> > Signed-off-by: Zhang Yanmin <yanmin.zhang@...ux.intel.com>
> > 
> > ---
Thanks for your comments.

> 
> You can't safely put packets on another CPU queue without adding a spinlock.
input_pkt_alien_queue is a struct sk_buff_head, which already contains a spinlock. We
use that lock to protect the queue.
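
To make that concrete, here is a minimal sketch of the hand-off side, assuming the
input_pkt_alien_queue field this patch adds to softnet_data; hand_off_skb_list is a
made-up name for illustration, not the literal patch code:

#include <linux/skbuff.h>
#include <linux/netdevice.h>

/* Sketch: splice a locally collected skb list onto another cpu's
 * input_pkt_alien_queue, using the spinlock embedded in sk_buff_head. */
static void hand_off_skb_list(struct sk_buff_head *local_list, int target_cpu)
{
	struct softnet_data *sd = &per_cpu(softnet_data, target_cpu);
	unsigned long flags;

	spin_lock_irqsave(&sd->input_pkt_alien_queue.lock, flags);
	skb_queue_splice_tail_init(local_list, &sd->input_pkt_alien_queue);
	spin_unlock_irqrestore(&sd->input_pkt_alien_queue.lock, flags);

	/* The target cpu's backlog NAPI still has to be scheduled, e.g. via an
	 * IPI; raise_netif_irq in the patch covers that part. */
}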

> And if you add the spinlock, you drop the performance back down for your
> device and all the other devices.
My testing shows a 43% improvement. As multi-core machines are becoming
popular, we can dedicate some cores to packet collection only.

I use the spinlock carefully. The delivering cpu takes the lock only when its
input_pkt_queue is empty, and just merges the whole list into input_pkt_queue, so the
later skb dequeues needn't hold the spinlock. On the other side, the original receiving
cpu dispatches a batch of skbs (64 packets with the IXGBE default) while holding the
lock only once.
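
In pseudo-kernel-C, the consumer side of that idea looks roughly like the sketch
below. It shows the locking logic only; the real process_backlog also disables
local irqs around the dequeue, which is elided here:

/* Sketch: the delivering cpu drains its own input_pkt_queue without the alien
 * lock, and takes the alien queue's lock only once per refill, splicing the
 * whole handed-off batch across in one step. */
static int backlog_poll_sketch(struct softnet_data *sd, int budget)
{
	struct sk_buff *skb;
	int work = 0;

	while (work < budget) {
		skb = __skb_dequeue(&sd->input_pkt_queue);
		if (!skb) {
			unsigned long flags;

			if (skb_queue_empty(&sd->input_pkt_alien_queue))
				break;

			spin_lock_irqsave(&sd->input_pkt_alien_queue.lock, flags);
			skb_queue_splice_tail_init(&sd->input_pkt_alien_queue,
						   &sd->input_pkt_queue);
			spin_unlock_irqrestore(&sd->input_pkt_alien_queue.lock,
					       flags);
			continue;
		}
		netif_receive_skb(skb);
		work++;
	}
	return work;
}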

>  Also, you will end up reordering
> packets which hurts single stream TCP performance.
Would you like to elaborate on the scenario? Do you mean that multi-queue also
hurts single-stream TCP performance when we bind the multi-queue interrupts to
different cpus?

> 
> Is this all because the hardware doesn't do MSI-X
IXGBE supports MSI-X and I enabled it when testing. The receiver has 16 queues,
so 16 irq numbers. I bind 2 irq numbers per logical cpu of one physical cpu.

>  or are you testing only
> a single flow. 
What does a single flow mean here? One sender? I do start only one sender for
testing, because I couldn't get enough hardware.

In addition, my patch doesn't change the old interface, so there should be no
performance impact on old drivers.
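
For reference, a driver's NAPI RX cleanup would use the new interface roughly as
in the sketch below, following steps 1-3 from the description quoted above.
my_hw_fetch_packet and target_cpu are placeholders, and the exact prototype of
raise_netif_irq is an assumption:

/* Sketch of driver-side usage in a NAPI poll routine. */
static int my_napi_poll(struct napi_struct *napi, int budget)
{
	struct sk_buff_head skb_head;	/* 1) local skb list */
	struct sk_buff *skb;
	int target_cpu = 0;		/* placeholder: chosen by driver policy */
	int work = 0;

	skb_queue_head_init(&skb_head);

	/* 2) only collect packets; no protocol processing on this cpu */
	while (work < budget && (skb = my_hw_fetch_packet(napi)) != NULL) {
		__skb_queue_tail(&skb_head, skb);
		work++;
	}

	/* 3) hand the whole batch to the chosen cpu (prototype assumed) */
	raise_netif_irq(target_cpu, &skb_head);

	if (work < budget)
		napi_complete(napi);
	return work;
}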

yanmin


