Message-ID: <e83b7763-daa7-af7f-ae4f-2886598ad9b0@samba.org>
Date: Fri, 10 Feb 2023 20:54:44 +0100
From: Stefan Metzmacher <metze@...ba.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>,
Jeremy Allison <jra@...ba.org>
Cc: Andy Lutomirski <luto@...nel.org>, Jens Axboe <axboe@...nel.dk>,
Linux API Mailing List <linux-api@...r.kernel.org>,
Dave Chinner <david@...morbit.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Matthew Wilcox <willy@...radead.org>,
Al Viro <viro@...iv.linux.org.uk>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Samba Technical <samba-technical@...ts.samba.org>,
io-uring <io-uring@...r.kernel.org>
Subject: Re: copy on write for splice() from file to pipe?
On 10.02.23 20:42, Linus Torvalds wrote:
> On Fri, Feb 10, 2023 at 11:27 AM Jeremy Allison <jra@...ba.org> wrote:
>>
>> 1). Client opens file with a lease. Hurrah, we think we can use splice() !
>> 2). Client writes into file.
>> 3). Client calls SMB_FLUSH to ensure data is on disk.
>> 4). Client reads the data just written to ensure it's good.
>> 5). Client overwrites the previously written data.
>>
>> Now when client issues (4), the read request, if we
>> zero-copy using splice() - I don't think there's a way
>> we get notified when the data has finally left the
>> system and the mapped splice memory in the buffer cache
>> is safe to overwrite by the write (5).
>
> Well, but we know that either:
>
> (a) the client has already gotten the read reply, and does the write
> afterwards. So (4) has already not just left the network stack, but
> actually made it all the way to the client.
>
> OR
>
> (b) (4) and (5) clearly aren't ordered on the client side (ie your
> "client" is not one single thread, and did an independent read and
> overlapping write), and the client can't rely on one happening before
> the other _anyway_.
>
> So if it's (b), then you might as well do the write first, because
> there's simply no ordering between the two. If you have a concurrent
> read and a concurrent write to the same file, the read result is going
> to be random anyway.
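
(For context, the zero-copy path we're talking about is roughly the
sketch below; heavily simplified illustration code, not the actual
smbd implementation, and file_fd, sock_fd and offset are just
placeholders.)

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

static ssize_t zero_copy_read_reply(int file_fd, off64_t offset,
				    int sock_fd, size_t count)
{
	int pipefd[2];
	ssize_t n;

	if (pipe(pipefd) < 0)
		return -1;

	/* The pipe ends up holding references to the file's
	 * page-cache pages, nothing is copied here. */
	n = splice(file_fd, &offset, pipefd[1], NULL, count,
		   SPLICE_F_MOVE);
	if (n <= 0)
		goto out;

	/* Hand the same pages to the socket.  Nothing tells us when
	 * the NIC is done with them, so a later write to the file can
	 * still change what goes out on the wire. */
	n = splice(pipefd[0], NULL, sock_fd, NULL, (size_t)n,
		   SPLICE_F_MOVE | SPLICE_F_MORE);
out:
	close(pipefd[0]);
	close(pipefd[1]);
	return n;
}
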
I guess that's true, most clients won't have a problem.
However, in theory it's possible that a client uses a feature
called compounding, which means two requests are batched on the
way to the server, processed there sequentially, and the responses
are batched again on the way back.
But we already have detection for that and the existing code also avoids
sendfile() in that case.
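
The check itself is conceptually just the following (a hypothetical
sketch, the real smbd logic is structured differently and the names
are made up; it reuses the zero_copy_read_reply() sketch from above):
if the READ is part of a compound chain, copy through a private
buffer with pread() instead of taking the zero-copy path.

#include <stdbool.h>
#include <unistd.h>

/* uses zero_copy_read_reply() from the sketch above */
static ssize_t smb2_read_send_data(bool in_compound_chain,
				   int file_fd, off_t offset,
				   int sock_fd, void *buf, size_t count)
{
	if (in_compound_chain) {
		/* Copying path: the reply data lives in a private
		 * buffer, so a later overwrite of the file cannot
		 * change what goes out on the wire. */
		ssize_t n = pread(file_fd, buf, count, offset);
		if (n < 0)
			return -1;
		return write(sock_fd, buf, n);
	}

	/* Zero-copy path: the splice() sketch above. */
	return zero_copy_read_reply(file_fd, offset, sock_fd, count);
}
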
metze