[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CA+55aFx5Y=Rddw3ObnOm-XSUkJezfxzNGuTckD2ChVmJRNsD2g@mail.gmail.com>
Date: Sun, 18 Sep 2016 13:12:21 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Al Viro <viro@...iv.linux.org.uk>
Cc: Jens Axboe <axboe@...nel.dk>, Nick Piggin <npiggin@...il.com>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Network Development <netdev@...r.kernel.org>,
Eric Dumazet <edumazet@...gle.com>
Subject: Re: skb_splice_bits() and large chunks in pipe (was Re:
xfs_file_splice_read: possible circular locking dependency detected
On Sun, Sep 18, 2016 at 12:31 PM, Al Viro <viro@...iv.linux.org.uk> wrote:
> FWIW, I'm not sure if skb_splice_bits() can't land us in trouble; fragments
> might come from compound pages and I'm not entirely convinced that we won't
> end up with coalesced fragments putting more than PAGE_SIZE into a single
> pipe_buffer. And that could badly confuse a bunch of code.
The pipe buffer code is actually *supposed* to handle any size
allocations at all. They should *not* be limited by pages, exactly
because the data can come from huge-pages or just multi-page
allocations. It's definitely possible with networking, and networking
is one of the *primary* targets of splice in many ways.
So if the splice code ends up being confused by "this is not just
inside a single page", then the splice code is buggy, I think.
Why would splice_write() cases be confused anyway? A filesystem needs
to be able to handle the case of "this needs to be split" regardless,
since even if the source buffer were to fit in a page, the offset
might obviously mean that the target won't fit in a page.
Now, if you decide that you want to make the iterator always split
those possibly big cases and never have big iovec entries, I guess
that would potentially be ok. But my initial reaction is that they are
perfectly normal and should be handled normally, and any code that
depends on a splice buffer fitting in one page is just buggy and
should be fixed.
Linus
Powered by blists - more mailing lists