Date:	Thu, 8 Jan 2009 23:20:39 +0100
From:	Willy Tarreau <w@....eu>
To:	David Miller <davem@...emloft.net>
Cc:	ben@...s.com, jarkao2@...il.com, mingo@...e.hu,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	jens.axboe@...cle.com
Subject: Re: [PATCH] tcp: splice as many packets as possible at once

On Thu, Jan 08, 2009 at 01:55:15PM -0800, David Miller wrote:
> From: Ben Mansell <ben@...s.com>
> Date: Thu, 08 Jan 2009 21:50:44 +0000
> 
> > > From fafe76713523c8e9767805cfdc7b73323d7bf180 Mon Sep 17 00:00:00 2001
> > > From: Willy Tarreau <w@....eu>
> > > Date: Thu, 8 Jan 2009 17:10:13 +0100
> > > Subject: [PATCH] tcp: splice as many packets as possible at once
> > >
> > > Currently, in non-blocking mode, tcp_splice_read() returns after
> > > splicing one segment regardless of the len argument. This results
> > > in low performance and very high overhead due to syscall rate when
> > > splicing from interfaces which do not support LRO.
> > >
> > > The fix simply consists in not breaking out of the loop after the
> > > first read. That way, we can read up to the size requested by the
> > > caller and still return when there is no data left.
> > >
> > > Performance has significantly improved with this fix, with the
> > > number of calls to splice() divided by about 20, and CPU usage
> > > dropped from 100% to 75%.
> > > 
> > 
> > I get similar results with my testing here. Benchmarking an application
> > with this patch shows that more than one packet is being splice()d in
> > at once; as a result, I see a doubling in throughput.
> > 
> > Tested-by: Ben Mansell <ben@...s.com>
> 
> I'm not applying this until someone explains to me why
> we should remove this test from the splice receive but
> keep it in the tcp_recvmsg() code where it has been
> essentially forever.

In my opinion, the code structure differs between the two functions. In
tcp_recvmsg(), the test is only made when (copied > 0), where copied is
the total amount of data processed since entering the function. If we
removed the test there, we could no longer break out of the loop once
something had been copied. In tcp_splice_read(), the test is still
present in the (!ret) code path, where ret is the number of bytes
processed by the last call, so the test is still performed regardless of
what has already been transferred.
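
To make the first point concrete, here is a rough user-space model of the
tcp_recvmsg()-style loop shape (non-blocking path only). copy_step() and
the segment counter are invented for the example; this only sketches where
the test sits, it is not the actual net/ipv4/tcp.c code:

#include <stdio.h>

/* toy stand-in for one copy step: bytes taken from the receive queue,
 * or 0 when nothing is available right now (invented for the example) */
static int queued_segments = 3;

static int copy_step(void)
{
	if (!queued_segments)
		return 0;
	queued_segments--;
	return 1460;			/* one MSS-sized segment */
}

/* tcp_recvmsg()-like shape: the "return what we have" test sits under
 * "if (copied)", so it only fires once the queue has been drained.
 * Removing it would leave no way to break out after copying something. */
static long recvmsg_like_nonblock(long len)
{
	long copied = 0;

	while (len > 0) {
		int ret = copy_step();

		if (ret) {		/* data available: take it, keep going */
			copied += ret;
			len -= ret;
			continue;
		}
		if (copied)		/* the guarded test discussed above */
			break;		/* return the partial result */
		return -1;		/* nothing at all: would block (-EAGAIN) */
	}
	return copied;
}

int main(void)
{
	printf("copied %ld bytes in one call\n", recvmsg_like_nonblock(65536));
	return 0;
}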

So in summary, in tcp_splice_read() without this test, we go back to the
top of the loop, and if __tcp_splice_read() returns 0, we break out of
the loop.
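
As a quick illustration of that summary, here is a similarly invented
user-space model of the tcp_splice_read()-style loop in non-blocking mode;
splice_step() stands in for __tcp_splice_read() and "stop_early" models
the unconditional test the patch removes (again a sketch, not the real
kernel code):

#include <stdio.h>

/* toy stand-in for __tcp_splice_read(): bytes spliced from the receive
 * queue, or 0 when nothing is left right now (invented for the example) */
static int queued_segments = 5;

static int splice_step(void)
{
	if (!queued_segments)
		return 0;
	queued_segments--;
	return 1460;			/* one MSS-sized segment */
}

/* tcp_splice_read()-like shape: the (!ret) path alone terminates the
 * loop once the queue is drained; with stop_early set, the modeled
 * "if (!timeo) break;" ends it after a single segment instead. */
static long splice_like_nonblock(long len, int stop_early)
{
	long spliced = 0;

	while (len > 0) {
		int ret = splice_step();

		if (!ret) {
			if (spliced)
				break;	/* no more data: return what we have */
			return -1;	/* nothing at all: -EAGAIN */
		}
		spliced += ret;
		len -= ret;

		if (stop_early)		/* models the test being removed */
			break;
	}
	return spliced;
}

int main(void)
{
	printf("with the test:    %ld bytes per call\n",
	       splice_like_nonblock(65536, 1));
	queued_segments = 5;
	printf("without the test: %ld bytes per call\n",
	       splice_like_nonblock(65536, 0));
	return 0;
}

With the segment counts above, the first call returns a single 1460-byte
segment while the second drains all 7300 queued bytes, which matches the
behaviour described in the patch.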

I don't know whether my explanation is clear; it's easier to follow the
loops with the code in front of you :-/

Willy

