Message-ID: <524464E6.2060006@redhat.com>
Date: Thu, 26 Sep 2013 12:46:30 -0400
From: Ric Wheeler <rwheeler@...hat.com>
To: "J. Bruce Fields" <bfields@...ldses.org>
CC: Miklos Szeredi <miklos@...redi.hu>, Zach Brown <zab@...hat.com>,
Anna Schumaker <schumaker.anna@...il.com>,
Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux-Fsdevel <linux-fsdevel@...r.kernel.org>,
"linux-nfs@...r.kernel.org" <linux-nfs@...r.kernel.org>,
Trond Myklebust <Trond.Myklebust@...app.com>,
Bryan Schumaker <bjschuma@...app.com>,
"Martin K. Petersen" <mkp@....net>, Jens Axboe <axboe@...nel.dk>,
Mark Fasheh <mfasheh@...e.com>,
Joel Becker <jlbec@...lplan.org>,
Eric Wong <normalperson@...t.net>
Subject: Re: [RFC] extending splice for copy offloading
On 09/26/2013 11:34 AM, J. Bruce Fields wrote:
> On Thu, Sep 26, 2013 at 10:58:05AM +0200, Miklos Szeredi wrote:
>> On Wed, Sep 25, 2013 at 11:07 PM, Zach Brown <zab@...hat.com> wrote:
>>>> A client-side copy will be slower, but I guess it does have the
>>>> advantage that the application can track progress to some degree, and
>>>> abort it fairly quickly without leaving the file in a totally undefined
>>>> state--and both might be useful if the copy's not a simple constant-time
>>>> operation.
>>> I suppose, but can't the app achieve a nice middle ground by copying the
>>> file in smaller syscalls? Avoid bulk data motion back to the client,
>>> but still get notification every, I dunno, few hundred meg?
>> Yes. And if "cp" could just be switched from a read+write syscall
>> pair to a single splice syscall using the same buffer size.
> Will the various magic fs-specific copy operations become inefficient
> when the range copied is too small?
>
> (Totally naive question, as I have no idea how they really work.)
>
> --b.
I think that is not really possible to tell when we invoke it. How long it
takes depends very much on the target device (or file system, etc.). It could
be as simple as a reflink copying a smallish amount of metadata, or it could
fall back to a full byte-by-byte copy. Also note that speed is not the only
impact here: some of the mechanisms actually do not consume more space (they
just increment shared data references).
It would probably make more sense to send it off to the target device and have
it return an error when not appropriate (then the app can fall back to the
old-fashioned copy).
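A userspace sketch of that fall-back pattern, purely for illustration: this
assumes a copy_file_range()-style syscall (the interface that eventually
landed in Linux 4.5 for this kind of offload). Copying in bounded chunks also
gives the app the progress/abort points discussed above; the buffer size and
error list are my assumptions, not anything specified in this thread.

```c
#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Copy len bytes from in_fd to out_fd in chunks. Try the offloaded
 * copy first; if the kernel or filesystem cannot do it, fall back to
 * a plain read+write bounce-buffer copy for that chunk.
 * Returns bytes copied, or -1 on error. */
static ssize_t do_copy(int in_fd, int out_fd, size_t len)
{
    size_t done = 0;

    while (done < len) {
        size_t chunk = len - done;
        ssize_t n = copy_file_range(in_fd, NULL, out_fd, NULL, chunk, 0);

        if (n > 0) {            /* offload (or partial offload) worked */
            done += (size_t)n;
            continue;
        }
        if (n == 0)             /* hit EOF on the source */
            break;

        /* Offload not appropriate here: fall back to the old way. */
        if (errno == ENOSYS || errno == EXDEV || errno == EOPNOTSUPP) {
            char buf[64 * 1024];
            size_t want = chunk < sizeof buf ? chunk : sizeof buf;
            ssize_t r = read(in_fd, buf, want);

            if (r < 0)
                return -1;
            if (r == 0)
                break;
            for (ssize_t w = 0; w < r; ) {
                ssize_t k = write(out_fd, buf + w, (size_t)(r - w));
                if (k < 0)
                    return -1;
                w += k;
            }
            done += (size_t)r;
            continue;
        }
        return -1;              /* some other, real error */
    }
    return (ssize_t)done;
}
```

Because each iteration is bounded, the app can report progress or abort
between chunks, which is the middle ground Zach described.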
ric
>
>> And then
>> the user would only notice that things got faster in case of server
>> side copy. No problems with long blocking times (at least not much
>> worse than it was).
>>
>> However "cp" doesn't do reflinking by default, it has a switch for
>> that. If we just want "cp" and the like to use splice without fearing
>> side effects then by default we should try to be as close to
>> read+write behavior as possible. No? That's what I'm really
>> worried about when you want to wire up splice to reflink by default.
>> I do think there should be a flag for that. And if on the block level
>> some magic happens, so be it. It's not the fs developer's worry any
>> more ;)
>>
>> Thanks,
>> Miklos
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
>> the body of a message to majordomo@...r.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html