Message-ID: <CAJPywTKEFpc+dxCkPcJMcbkw2AeR_yWAEdt--JMXOBomffxOLg@mail.gmail.com>
Date:   Thu, 13 Dec 2018 14:33:45 +0100
From:   Marek Majkowski <marek@...udflare.com>
To:     eric.dumazet@...il.com
Cc:     netdev@...r.kernel.org
Subject: Re: splice() performance for TCP socket forwarding

OK, 4.19 does seem to mostly fix SO_RCVLOWAT with splice(), but I
don't fully understand the behavior:

fcntl(8, F_SETPIPE_SZ, 1048576)         = 1048576 <0.000033>
setsockopt(4, SOL_SOCKET, SO_RCVLOWAT, [131072], 4) = 0 <0.000014>
splice(4, NULL, 9, NULL, 1048576, SPLICE_F_MOVE) = 121435 <71.039385>
splice(8, NULL, 5, NULL, 121435, SPLICE_F_MOVE) = 121435 <0.000118>
splice(4, NULL, 9, NULL, 1048576, SPLICE_F_MOVE) = 11806 <0.000019>
splice(8, NULL, 5, NULL, 11806, SPLICE_F_MOVE) = 11806 <0.000018>

So, even though I requested 128 KiB, the first splice returned ~121 KiB
and the second one ~11 KiB. The first one can be explained by data plus
metadata crossing the 128 KiB threshold. I'm not sure about the second
splice.

On Thu, Dec 13, 2018 at 2:18 PM Marek Majkowski <marek@...udflare.com> wrote:
>
> On Thu, Dec 13, 2018 at 2:17 PM Marek Majkowski <marek@...udflare.com> wrote:
> >
> > Eric,
> >
> > On Thu, Dec 13, 2018 at 1:49 PM Eric Dumazet <eric.dumazet@...il.com> wrote:
> > > On 12/13/2018 03:25 AM, Marek Majkowski wrote:
> > > > Hi!
> > > >
> > > > I'm basically trying to do TCP splicing in Linux. I'm focusing on
> > > > performance of the simplest case: receive data from one TCP socket,
> > > > write data to another TCP socket. I get poor performance with splice.
> > > >
> > > > First, the naive code, pretty much:
> > > >
> > > > while (1) {
> > > >   n = read(rs, buf, sizeof buf);
> > > >   write(ws, buf, n);
> > > > }
> > > >
> > > > With GRO enabled, this code does roughly line rate at 10 Gbps, hovering
> > > > around 50% CPU in the application (mostly sys).
> > > >
> > > > When replaced with splice version:
> > > >
> > > > pipe(pfd);
> > > > fcntl(pfd[0], F_SETPIPE_SZ, 1024 * 1024);
> > >
> > > Why 1 MB?
> > >
> > > splice code will be expensive if less than 1 MB is present in the receive queue.
> >
> > I'm not sure what you are suggesting. I'm just shuffling data between
> > two sockets. Is there a better buffer size? Is it possible to keep
> > splice() blocked until it manages to forward N bytes of data? (I
> > tried this unsuccessfully with SO_RCVLOWAT.)
>
> I jumped the gun here. Let me re-try SO_RCVLOWAT on 4.19.
>
> > Here is a snippet from strace:
> >
> > splice(4, NULL, 11, NULL, 1048576, 0) = 373760 <0.000048>
> > splice(10, NULL, 5, NULL, 373760, 0) = 373760 <0.000108>
> > splice(4, NULL, 11, NULL, 1048576, 0) = 335800 <0.000065>
> > splice(10, NULL, 5, NULL, 335800, 0) = 335800 <0.000202>
> > splice(4, NULL, 11, NULL, 1048576, 0) = 227760 <0.000029>
> > splice(10, NULL, 5, NULL, 227760, 0) = 227760 <0.000106>
> > splice(4, NULL, 11, NULL, 1048576, 0) = 16060 <0.000019>
> > splice(10, NULL, 5, NULL, 16060, 0) = 16060 <0.000028>
> > splice(4, NULL, 11, NULL, 1048576, 0) = 7300 <0.000013>
> > splice(10, NULL, 5, NULL, 7300, 0) = 7300 <0.000021>
> >
> > > > while (1) {
> > > >   n = splice(rd, NULL, pfd[1], NULL, 1024*1024, SPLICE_F_MOVE);
> > > >   splice(pfd[0], NULL, wd, NULL, n, SPLICE_F_MOVE);
> > > > }
> > > >
> > > > Full code:
> > > > https://gist.github.com/majek/c58a97b9be7d9217fe3ebd6c1328faaa#file-proxy-splice-c-L59
> > > >
> > > > I get 100% cpu (sys) and dramatically worse performance (1.5x slower).
> > > >
> > > > A naive run of perf record ./proxy-splice shows:
> > > >    5.73%  [k] queued_spin_lock_slowpath
> > > >    5.23%  [k] ipt_do_table
> > > >    4.72%  [k] __splice_segment.part.59
> > > >    4.72%  [k] do_tcp_sendpages
> > > >    3.47%  [k] _raw_spin_lock_bh
> > > >    3.36%  [k] __x86_indirect_thunk_rax
> > > >
> > > > (kernel 4.14.71)
> > > >
> > > > Is it possible to squeeze more out of splice? Is it possible to force
> > > > splice() to block until enough data is available, instead of returning
> > > > quickly? (SO_RCVLOWAT doesn't work.)
> > >
> > > I believe it should work on recent Linux kernels (4.18+):
> > >
> > > 03f45c883c6f391ed4fff8292415b35bd1107519 tcp: avoid extra wakeups for SO_RCVLOWAT users
> > > 796f82eafcd96629c2f9a0332dbb4f474854aaf8 tcp: fix delayed acks behavior for SO_RCVLOWAT
> > > d1361840f8c519eaee9a78ffe09e4f0a1b586846 tcp: fix SO_RCVLOWAT and RCVBUF autotuning
> >
> > I can confirm this. On 4.19 the splice program indeed goes down to the
> > expected ~50% CPU, with performance comparable to the naive read/write
> > version.
> >
> > > >
> > > > Is there another way of doing TCP splicing? I'm aware of TCP ZEROCOPY
> > > > that landed in 4.19.
> > > >
> > >
> > > TCP zero copy only works if your MSS is exactly 4096 bytes (+ TCP options),
> > > so it might be tricky; it also requires the NIC driver to be able to perform clean header splitting.
> >
> > Oh, that's a pity.
> >
> > Thanks for help.
> > Marek
