Message-ID: <20090120121115.GA17277@ioremap.net>
Date: Tue, 20 Jan 2009 15:11:15 +0300
From: Evgeniy Polyakov <zbr@...emap.net>
To: Ben Mansell <ben@...s.com>
Cc: Willy Tarreau <w@....eu>, David Miller <davem@...emloft.net>,
herbert@...dor.apana.org.au, jarkao2@...il.com,
dada1@...mosbay.com, mingo@...e.hu, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org, jens.axboe@...cle.com
Subject: Re: [PATCH] tcp: splice as many packets as possible at once
On Tue, Jan 20, 2009 at 12:01:13PM +0000, Ben Mansell (ben@...s.com) wrote:
> I've also tested on the same three patches (against 2.6.27.2 here), and
> the patches appear to work just fine. I'm running a similar proxy
> benchmark test to Willy, on a machine with 4 gigabit NICs (2xtg3,
> 2xforcedeth). splice is working OK now, although I get identical results
> when using splice() or read()/write(): 2.4 Gbps at 100% CPU (2% user,
> 98% system).
With a small MTU, or when the driver does not support fragmented
allocation (IIRC at least forcedeth does not), the skb will contain all
of the data in its linear part, and thus it will be copied within the
kernel. read()/write() does effectively the same, but the copy goes
through userspace.
This should only affect splice usage that involves the socket->pipe
data transfer.
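
For reference, a minimal sketch of that socket->pipe->socket splice
path (splice_forward is a made-up name; a real proxy would keep a
persistent pipe and handle EAGAIN and partial transfers):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Forward up to len bytes between two connected sockets without
 * the data crossing userspace. */
static ssize_t splice_forward(int sock_in, int sock_out, size_t len)
{
        int pipefd[2];
        ssize_t n, total;

        if (pipe(pipefd) < 0)
                return -1;

        /* Socket -> pipe: with a small MTU or a driver that keeps
         * everything in the skb's linear part, the kernel still has
         * to copy the data into the pipe buffers at this step. */
        n = splice(sock_in, NULL, pipefd[1], NULL, len,
                   SPLICE_F_MOVE | SPLICE_F_MORE);
        if (n > 0)
                /* Pipe -> socket: pages are handed to the output
                 * socket without another copy. */
                total = splice(pipefd[0], NULL, sock_out, NULL, n,
                               SPLICE_F_MOVE | SPLICE_F_MORE);
        else
                total = n;

        close(pipefd[0]);
        close(pipefd[1]);
        return total;
}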
> I may be hitting a h/w limitation which prevents any higher throughput,
> but I'm a little surprised that splice() didn't use less CPU time.
> Anyway, the splice code is working which is the important part!
Does splice without the patches (but with the performance improvement
for non-blocking splice) have the same performance? It does not copy
data, but you may hit the data corruption the patches fix. If the
performance is the same, this may indeed be a HW limitation.
--
Evgeniy Polyakov