Message-Id: <20090113.160721.68474279.davem@davemloft.net>
Date: Tue, 13 Jan 2009 16:07:21 -0800 (PST)
From: David Miller <davem@...emloft.net>
To: zbr@...emap.net
Cc: dada1@...mosbay.com, w@....eu, ben@...s.com, jarkao2@...il.com,
mingo@...e.hu, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org, jens.axboe@...cle.com
Subject: Re: [PATCH] tcp: splice as many packets as possible at once
From: Evgeniy Polyakov <zbr@...emap.net>
Date: Sun, 11 Jan 2009 19:05:48 +0300
> On Sun, Jan 11, 2009 at 05:00:37PM +0100, Eric Dumazet (dada1@...mosbay.com) wrote:
> > While we drop the lock in skb_splice_bits() to prevent the deadlock,
> > we also process the backlog at that stage. There is no need to
> > process the backlog again in the higher-level function.
>
> Yes, but having it earlier allows us to receive new skbs while we
> process the ones already received.
>
> > > I think that even with non-blocking splice that release_sock/lock_sock
> > > is needed, since we are able to do parallel work: receive new data
> > > (scheduled by the early release_sock backlog processing) in bh context
> > > while processing already-received data via the splice codepath.
> > > Maybe in non-blocking splice mode this is not an issue, but in
> > > blocking mode this allows us to grab more skbs at once in skb_splice_bits.
> >
> > skb_splice_bits() operates on one skb, you lost me :)
>
> Exactly, and to get that we release the socket earlier, so the data
> can be acked; then, while we copy it or do anything else, the next
> skb can be received.
I think the socket release in skb_splice_bits() (although necessary)
just muddies the waters, and whether the extra one done in
tcp_splice_read() helps at all is open to debate.
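
For reference, the locking dance being debated looks roughly like this
(a simplified, from-memory sketch of the 2.6.28-era code paths, not the
exact source):

```c
/* skb_splice_bits(), simplified: clone the skb (tcp_read_sock()
 * unconditionally frees the original on return from the actor),
 * then drop the socket lock around splice_to_pipe() to avoid the
 * sk_lock vs. pipe i_mutex inversion relative to sendfile().
 * release_sock() also runs the socket backlog, so new skbs can be
 * queued while we sleep in splice_to_pipe(). */
skb = skb_clone(skb, GFP_KERNEL);	/* the clone objected to above */
/* ... fill the splice pipe descriptor from the skb's pages ... */
release_sock(sk);
ret = splice_to_pipe(pipe, &spd);
lock_sock(sk);

/* tcp_splice_read(), simplified: the "extra" release/lock pair per
 * loop iteration is the one whose value is open to debate. */
while (tss.len) {
	ret = __tcp_splice_read(sk, &tss);
	/* ... error/EOF handling ... */
	release_sock(sk);	/* let the backlog drain */
	lock_sock(sk);
}
```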
That skb_clone() done by skb_splice_bits() pisses me off too;
we really ought to fix that. And we also have that data corruption
bug to cure.