Message-ID: <1309279892.3559.6.camel@localhost.localdomain>
Date: Tue, 28 Jun 2011 09:51:32 -0700
From: Shirley Ma <mashirle@...ibm.com>
To: David Miller <davem@...emloft.net>
Cc: mst@...hat.com, eric.dumazet@...il.com, avi@...hat.com,
arnd@...db.de, netdev@...r.kernel.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH V7 2/4 net-next] skbuff: Add userspace zero-copy
buffers in skb
On Mon, 2011-06-27 at 15:54 -0700, David Miller wrote:
> From: Shirley Ma <mashirle@...ibm.com>
> Date: Mon, 27 Jun 2011 08:45:10 -0700
>
> > To support skb zero-copy, a pointer needs to be added to the skb
> > shared info.
> > Do you agree with this approach? If not, do you have any other
> > suggestions?
>
> I really can't form an opinion unless I am shown the complete
> implementation, what this give us in return, what the impact is, etc.
Zero-copy skb buffers can save a significant amount of CPU. Right now, I
have only implemented macvtap/vhost zero-copy between the KVM guest and
host. The performance is as follows:

Single TCP_STREAM, 120-second test, 2.6.39-rc3, over an ixgbe 10Gb NIC:
Message    BW(Mb/s)   qemu-kvm CPU%   vhost-net CPU%   PerfTop irq/s
4K          7408.57        92.1%           22.6%            1229
4K(Orig)    4913.17       118.1%           84.1%            2086
8K          9129.90        89.3%           23.3%            1141
8K(Orig)    7094.55       115.9%           84.7%            2157
16K         9178.81        89.1%           23.3%            1139
16K(Orig)   8927.10       118.7%           83.4%            2262
64K         9171.43        88.4%           24.9%            1253
64K(Orig)   9085.85       115.9%           82.4%            2229

(Orig = without zero-copy.)
You can see that zero-copy saves roughly 50% of the overall CPU.
The impact is that every skb allocation carries one more pointer in the
skb shared info, plus a pointer check in skb release when the last
reference is gone.
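
Roughly, the idea looks like the sketch below. The structure and
function names here are illustrative only, not necessarily the exact
identifiers used in the patch series:

#include <stddef.h>

/* Illustrative sketch, not the actual patch: a completion hook that the
 * zero-copy owner (e.g. vhost) registers so it learns when the
 * user-space buffer can be reused. */
struct ubuf_info {
	void (*callback)(void *arg);	/* called when the last reference drops */
	void *arg;			/* opaque cookie for the owner */
};

/* Per-skb cost: one extra pointer carried in the shared info. */
struct skb_shared_info_sketch {
	/* ... existing shared-info fields ... */
	struct ubuf_info *ubuf;		/* NULL for ordinary (copied) skbs */
};

/* Release-time cost: one pointer check when the skb data is freed. */
static void skb_release_data_sketch(struct skb_shared_info_sketch *shinfo)
{
	if (shinfo->ubuf)
		shinfo->ubuf->callback(shinfo->ubuf->arg);
}
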
For skb clone, skb expand private head, and skb copy, the buffers are
still copied into the kernel, so we avoid a user application such as
tcpdump holding the user-space buffers for too long.
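
In other words, before such a consumer gets a long-lived reference, the
user-backed fragments are replaced with kernel copies and the zero-copy
completion can fire. A simplified sketch under those assumptions
(illustrative names again; the real code works on page frags and handles
allocation failure and reference counting):

#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Simplified model of "copy the user buffers back into the kernel" on
 * the clone/copy paths; not the actual kernel implementation. */
struct frag_sketch {
	void *data;	/* currently points into pinned user memory */
	size_t len;
};

static int skb_copy_ubufs_sketch(struct frag_sketch *frags, int nr_frags)
{
	int i;

	for (i = 0; i < nr_frags; i++) {
		void *copy = malloc(frags[i].len);	/* kernel would use page allocation */

		if (!copy)
			return -1;	/* would be -ENOMEM in the kernel */
		memcpy(copy, frags[i].data, frags[i].len);
		frags[i].data = copy;	/* the skb now owns kernel memory */
	}
	/* After this point the ubuf callback can run, so the user-space
	 * buffers are released promptly even if tcpdump keeps the clone. */
	return 0;
}
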
Thanks
Shirley