Message-ID: <20090111133533.GB25337@ioremap.net>
Date:	Sun, 11 Jan 2009 16:35:33 +0300
From:	Evgeniy Polyakov <zbr@...emap.net>
To:	Eric Dumazet <dada1@...mosbay.com>
Cc:	Willy Tarreau <w@....eu>, David Miller <davem@...emloft.net>,
	ben@...s.com, jarkao2@...il.com, mingo@...e.hu,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	jens.axboe@...cle.com
Subject: Re: [PATCH] tcp: splice as many packets as possible at once

On Sun, Jan 11, 2009 at 02:14:57PM +0100, Eric Dumazet (dada1@...mosbay.com) wrote:
> >> 1) the release_sock/lock_sock done in tcp_splice_read() is not necessary
> >> to process the backlog. It's already done in skb_splice_bits()
> > 
> > Yes, in tcp_splice_read() they were added to remove a deadlock.
> 
> Could you elaborate? A deadlock only if !SPLICE_F_NONBLOCK?

Sorry, I meant that we drop the lock in skb_splice_bits() to prevent the deadlock,
while tcp_splice_read() still needs its own release_sock/lock_sock to process the backlog.

I think that even with non-blocking splice the release_sock/lock_sock is
needed, since it lets us do two jobs in parallel: receive new data in bh
context (scheduled by the early release_sock backlog processing) and
process already received data via the splice codepath.
Maybe in non-blocking splice mode this is not an issue, but in blocking
mode it allows skb_splice_bits() to grab more skbs at once.
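
To illustrate what I mean, here is a rough sketch of the pattern (a sketch
only, not the code from the patch; declarations are trimmed and
want_more_data just stands in for the real length/flags checks):

    lock_sock(sk);
    while (want_more_data) {
            /*
             * The actor ends up in skb_splice_bits(), which may drop and
             * re-take the socket lock around the copy into the pipe, so a
             * full pipe does not deadlock us while we hold the lock.
             */
            ret = tcp_read_sock(sk, &rd_desc, tcp_splice_data_recv);
            if (ret <= 0)
                    break;

            /*
             * Cycle the lock so the backlog is processed and freshly
             * received skbs are visible to the next iteration.
             */
            release_sock(sk);
            lock_sock(sk);
    }
    release_sock(sk);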

> > 
> >> 2) If we loop in tcp_read_sock() calling skb_splice_bits() several times
> >> then we should perform the following tests inside this loop?
> >>
> >> if (sk->sk_err || sk->sk_state == TCP_CLOSE || (sk->sk_shutdown & RCV_SHUTDOWN) ||
> >>    signal_pending(current)) break;
> >>
> >> And remove them from tcp_splice_read()?
> > 
> > It could be done, but for what reason? To detect a disconnected socket early?
> > Is it worth the changes?
> 
> I was thinking about the case where your thread is doing a splice() from a tcp
> socket to a pipe, while another thread is splicing from this pipe to something else.
> 
> Once patched, tcp_read_sock() could loop for a long time...
 
Well, it may be a good idea... Cannot say anything against it :)
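
Something along these lines, I suppose (just a sketch of where the checks
would sit inside the tcp_read_sock() loop, not a tested change):

    /*
     * After each chunk handed to the actor (skb_splice_bits() in the
     * splice case), bail out early on errors, shutdown or signals:
     */
    if (sk->sk_err || sk->sk_state == TCP_CLOSE ||
        (sk->sk_shutdown & RCV_SHUTDOWN) || signal_pending(current))
            break;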

-- 
	Evgeniy Polyakov
