Message-ID: <4975D518.1000800@zeus.com>
Date: Tue, 20 Jan 2009 13:43:52 +0000
From: Ben Mansell <ben@...s.com>
To: Evgeniy Polyakov <zbr@...emap.net>
CC: Willy Tarreau <w@....eu>, David Miller <davem@...emloft.net>,
herbert@...dor.apana.org.au, jarkao2@...il.com,
dada1@...mosbay.com, mingo@...e.hu, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org, jens.axboe@...cle.com
Subject: Re: [PATCH] tcp: splice as many packets as possible at once
Evgeniy Polyakov wrote:
> On Tue, Jan 20, 2009 at 12:01:13PM +0000, Ben Mansell (ben@...s.com) wrote:
>> I've also tested with the same three patches (against 2.6.27.2 here), and
>> the patches appear to work just fine. I'm running a similar proxy
>> benchmark test to Willy, on a machine with 4 gigabit NICs (2xtg3,
>> 2xforcedeth). splice is working OK now, although I get identical results
>> when using splice() or read()/write(): 2.4 Gbps at 100% CPU (2% user,
>> 98% system).
>
> With a small MTU, or when the driver does not support fragmented allocation
> (iirc at least forcedeth does not), the skb will contain all the data in the
> linear part and will thus be copied in the kernel. read()/write() does
> effectively the same, but in userspace.
> This should only affect splice usage which involves the socket->pipe data
> transfer.
I'll try some larger MTUs and see if that helps - a larger MTU should also
give an improvement if I'm hitting a limit on the number of packets/second
that the cards can process, regardless of splice...
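For reference, the splice() path I'm benchmarking is essentially the usual
socket->pipe->socket forwarding loop, roughly like the sketch below. The
descriptor names and the 64KB length are illustrative, and EAGAIN/poll
handling is trimmed:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

static int splice_forward(int src_fd, int dst_fd)
{
        int pipefd[2];
        ssize_t n;

        if (pipe(pipefd) < 0)
                return -1;

        for (;;) {
                /* socket -> pipe */
                n = splice(src_fd, NULL, pipefd[1], NULL, 65536,
                           SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
                if (n <= 0)
                        break;  /* 0 = EOF, <0 = error or EAGAIN */

                /* pipe -> socket: drain whatever we just spliced in */
                while (n > 0) {
                        ssize_t m = splice(pipefd[0], NULL, dst_fd, NULL, n,
                                           SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
                        if (m <= 0)
                                goto out;
                        n -= m;
                }
        }
out:
        close(pipefd[0]);
        close(pipefd[1]);
        return 0;
}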
>> I may be hitting a h/w limitation which prevents any higher throughput,
>> but I'm a little surprised that splice() didn't use less CPU time.
>> Anyway, the splice code is working, which is the important part!
>
> Does splice without the patches (but with the performance improvement for
> non-blocking splice) have the same performance? It does not copy data,
> but you may hit the data corruption. If the performance is the same, this
> may indeed be a HW limitation.
With an unpatched kernel, the splice performance was worse (due to the
one-packet-per-splice issue). With the small patch to fix that, I was
getting around 2 Gbps, although oddly enough, I could then only get
2 Gbps with read()/write() as well...
I'll try to run some tests on a machine that hopefully doesn't have
these bottlenecks (and one that uses different NICs).
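For comparison, the read()/write() path in the proxy is just the plain
userspace copy loop, again a simplified sketch with an arbitrary buffer size:

#include <unistd.h>

static int copy_forward(int src_fd, int dst_fd)
{
        char buf[65536];
        ssize_t n;

        while ((n = read(src_fd, buf, sizeof(buf))) > 0) {
                char *p = buf;
                /* handle short writes on the outgoing socket */
                while (n > 0) {
                        ssize_t m = write(dst_fd, p, n);
                        if (m < 0)
                                return -1;
                        p += m;
                        n -= m;
                }
        }
        return n < 0 ? -1 : 0;
}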
Ben