Message-ID: <9ae5f07f-f4c5-69eb-bcb1-8bcbc15cbd09@kernel.dk>
Date: Thu, 9 Sep 2021 21:22:30 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Al Viro <viro@...iv.linux.org.uk>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Pavel Begunkov <asml.silence@...il.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [git pull] iov_iter fixes
On 9/9/21 9:11 PM, Al Viro wrote:
> On Thu, Sep 09, 2021 at 09:05:13PM -0600, Jens Axboe wrote:
>> On 9/9/21 8:57 PM, Al Viro wrote:
>>> On Thu, Sep 09, 2021 at 03:19:56PM -0600, Jens Axboe wrote:
>>>
>>>> Not sure how we'd do that, outside of stupid tricks like copy the
>>>> iov_iter before we pass it down. But that's obviously not going to be
>>>> very efficient. Hence we're left with having some way to reset/reexpand,
>>>> even in the presence of someone having done truncate on it.
>>>
>>> "Obviously" why, exactly? It's not that large a structure; it's not
>>> the optimal variant, but I'd like to see profiling data before assuming
>>> that it'll cause noticeable slowdowns.
>>
>> It's 48 bytes, and we have to do it upfront. That means we'd be doing it
>> for _all_ requests, not just when we need to retry. As an example, current
>> benchmarks are at ~4M read requests per core. That'd add ~200MB/sec of
>> memory traffic just doing this copy.
>
> Umm... How much of that will be handled by cache?
That depends. And what if the iovec itself has been modified mid-operation?
Then we'd need to copy the whole iovec array too, not just the iov_iter.
It's just not workable as a solution.
>> Besides, I think that's moot as there's a better way.
>
> I hope so, but I'm afraid that "let's reload from userland on e.g. short
> reads" is not better - there are plenty of interesting corner cases you
> need to handle with that.
As long as we're still in the context of the original submission, it's
tractable, provided we import the iovec the same way we did originally.
--
Jens Axboe