Message-ID: <CANn89iLU0BWxWrh1a3cfh+vOhRuyU5UJ8d5oD7ZW_GLfkMtvAQ@mail.gmail.com>
Date: Mon, 19 Jun 2023 12:59:18 +0200
From: Eric Dumazet <edumazet@...gle.com>
To: Pavel Begunkov <asml.silence@...il.com>
Cc: Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org, davem@...emloft.net, 
	dsahern@...nel.org, kuba@...nel.org
Subject: Re: [PATCH net-next 1/2] net/tcp: optimise locking for blocking splice

On Mon, Jun 19, 2023 at 11:27 AM Pavel Begunkov <asml.silence@...il.com> wrote:
>
> On 5/24/23 13:51, Pavel Begunkov wrote:
> > On 5/23/23 14:52, Paolo Abeni wrote:
> >> On Fri, 2023-05-19 at 14:33 +0100, Pavel Begunkov wrote:
> >>> Even when tcp_splice_read() reads all it was asked for, for blocking
> >>> sockets it'll release and immediately regrab the socket lock, loop
> >>> around and break on the while check.
> >>>
> >>> Check tss.len right after we adjust it, and return if we're done.
> >>> That saves us one release_sock(); lock_sock(); pair per successful
> >>> blocking splice read.
> >>>
> >>> Signed-off-by: Pavel Begunkov <asml.silence@...il.com>
> >>> ---
> >>>   net/ipv4/tcp.c | 8 +++++---
> >>>   1 file changed, 5 insertions(+), 3 deletions(-)
> >>>
> >>> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> >>> index 4d6392c16b7a..bf7627f37e69 100644
> >>> --- a/net/ipv4/tcp.c
> >>> +++ b/net/ipv4/tcp.c
> >>> @@ -789,13 +789,15 @@ ssize_t tcp_splice_read(struct socket *sock, loff_t *ppos,
> >>>        */
> >>>       if (unlikely(*ppos))
> >>>           return -ESPIPE;
> >>> +    if (unlikely(!tss.len))
> >>> +        return 0;
> >>>       ret = spliced = 0;
> >>>       lock_sock(sk);
> >>>       timeo = sock_rcvtimeo(sk, sock->file->f_flags & O_NONBLOCK);
> >>> -    while (tss.len) {
> >>> +    while (true) {
> >>>           ret = __tcp_splice_read(sk, &tss);
> >>>           if (ret < 0)
> >>>               break;
> >>> @@ -835,10 +837,10 @@ ssize_t tcp_splice_read(struct socket *sock, loff_t *ppos,
> >>>               }
> >>>               continue;
> >>>           }
> >>> -        tss.len -= ret;
> >>>           spliced += ret;
> >>> +        tss.len -= ret;
> >>
> >> The patch LGTM. The only minor thing I note is that the above
> >> chunk is not needed. Perhaps avoiding the unneeded delta would be worthwhile.
> >
> > It keeps it closer to the tss.len test, so I'd leave it for that reason,
> > but on the other hand the compiler should be perfectly able to optimise it
> > regardless (i.e. sub;cmp;jcc; vs sub;jcc;). I don't have strong feelings
> > about it and can change it if you want.
>
> Is there anything I can do to help here? I think the patch is
> fine, but I can amend the change per Paolo's suggestion if required.
>

We prefer patches that focus on the change itself, instead of also making
arbitrary changes that make future backports more likely to conflict.

Thanks.
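
For illustration, here is a minimal standalone sketch of the loop shape the
commit message describes. This is not the kernel code: lock_stub(),
unlock_stub(), and do_read_stub() are hypothetical stand-ins for
lock_sock(), release_sock(), and __tcp_splice_read(). The point it models is
that testing the remaining length right after adjusting it lets a blocking
read that was fully satisfied return without one extra unlock/lock round trip.

/*
 * Simplified, userspace-only sketch of the loop shape discussed above.
 * The helpers are hypothetical stand-ins, not the kernel's
 * lock_sock()/release_sock()/__tcp_splice_read().
 */
#include <stddef.h>
#include <stdio.h>

static void lock_stub(void)   { puts("lock"); }
static void unlock_stub(void) { puts("unlock"); }

/* Pretend each pass consumes up to 4 bytes of the request. */
static size_t do_read_stub(size_t want)
{
        return want < 4 ? want : 4;
}

/* After the patch: test 'len' right after adjusting it, so a request
 * that was fully satisfied returns without dropping and retaking the
 * lock one extra time. */
static size_t splice_like_read(size_t len)
{
        size_t spliced = 0;

        if (!len)
                return 0;

        lock_stub();
        while (1) {
                size_t ret = do_read_stub(len);

                spliced += ret;
                len -= ret;
                if (!len)               /* done: skip the unlock/lock pair */
                        break;

                /* blocking case: let others grab the lock between passes */
                unlock_stub();
                lock_stub();
        }
        unlock_stub();

        return spliced;
}

int main(void)
{
        printf("spliced %zu bytes\n", splice_like_read(10));
        return 0;
}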
