Message-ID: <20210126074956.GF4626@dread.disaster.area>
Date: Tue, 26 Jan 2021 18:49:56 +1100
From: Dave Chinner <david@...morbit.com>
To: Nicolas Boichat <drinkcat@...omium.org>
Cc: "Darrick J. Wong" <djwong@...nel.org>,
linux-fsdevel@...r.kernel.org, lkml <linux-kernel@...r.kernel.org>,
Amir Goldstein <amir73il@...il.com>,
Dave Chinner <dchinner@...hat.com>,
Luis Lozano <llozano@...omium.org>, iant@...gle.com
Subject: Re: [BUG] copy_file_range with sysfs file as input

On Tue, Jan 26, 2021 at 11:50:50AM +0800, Nicolas Boichat wrote:
> On Tue, Jan 26, 2021 at 9:34 AM Dave Chinner <david@...morbit.com> wrote:
> >
> > On Mon, Jan 25, 2021 at 03:54:31PM +0800, Nicolas Boichat wrote:
> > > Hi copy_file_range experts,
> > >
> > > We hit this interesting issue when upgrading Go compiler from 1.13 to
> > > 1.15 [1]. Basically we use Go's `io.Copy` to copy the content of
> > > `/sys/kernel/debug/tracing/trace` to a temporary file.
> > >
> > > Under the hood, Go now uses `copy_file_range` syscall to optimize the
> > > copy operation. However, that fails to copy any content when the input
> > > file is from sysfs/tracefs, with an apparent size of 0 (but there is
> > > still content when you `cat` it, of course).
> > >
> > > A repro case is available in comment7 (adapted from the man page),
> > > also copied below [2].
> > >
> > > Output looks like this (on kernels 5.4.89 (chromeos), 5.7.17 and
> > > 5.10.3 (chromeos))
> > > $ ./copyfrom /sys/kernel/debug/tracing/trace x
> > > 0 bytes copied
> >
> > That's basically telling you that copy_file_range() was unable to
> > copy anything. The man page says:
> >
> > RETURN VALUE
> >        Upon successful completion, copy_file_range() will return
> >        the number of bytes copied between files. This could be less
> >        than the length originally requested. If the file offset
> >        of fd_in is at or past the end of file, no bytes are copied,
> >        and copy_file_range() returns zero.
> >
> > The man page explains it perfectly.
>
> I'm not that confident the explanation is perfect ;-)
>
> How does one define "EOF"? The read manpage
> (https://man7.org/linux/man-pages/man2/read.2.html) defines it as a
> zero return value.

And so does copy_file_range(). That's the -API definition-; it does
not define how the kernel implementation decides when the file is at
EOF.
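
For example, a minimal userspace copy loop that follows that API
contract might look like this (a sketch only: error handling is
trimmed and the 1MB chunk size is arbitrary):

#define _GNU_SOURCE
#include <unistd.h>
#include <stdio.h>
#include <fcntl.h>

int main(int argc, char **argv)
{
	int fd_in, fd_out;
	ssize_t ret;

	if (argc < 3) {
		fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
		return 1;
	}

	fd_in = open(argv[1], O_RDONLY);
	fd_out = open(argv[2], O_CREAT | O_WRONLY | O_TRUNC, 0644);
	if (fd_in < 0 || fd_out < 0) {
		perror("open");
		return 1;
	}

	/* NULL offsets: the kernel advances the file positions. */
	do {
		ret = copy_file_range(fd_in, NULL, fd_out, NULL,
				      1 << 20, 0);
	} while (ret > 0);

	if (ret < 0) {
		perror("copy_file_range");
		return 1;
	}

	/* ret == 0: fd_in's offset is at or past EOF. */
	return 0;
}
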
> I don't think using the inode file size is
> standard.

It is the standard VFS filesystem definition of EOF.
Indeed:

copy_file_range()
  vfs_copy_file_range()
    generic_copy_file_checks()
      .....
	/* Shorten the copy to EOF */
	size_in = i_size_read(inode_in);
	if (pos_in >= size_in)
		count = 0;
	else
		count = min(count, size_in - (uint64_t)pos_in);

That inode size check for EOF is -exactly- what is triggering here,
and a zero-length copy returns 0 bytes, having done nothing.
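
You can see the trigger directly: these sysfs/tracefs files advertise
a zero i_size, which is what stat() reports as st_size. Something
like this trivial sketch shows it:

#include <sys/stat.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	struct stat st;

	if (argc < 2 || stat(argv[1], &st) < 0) {
		perror("stat");
		return 1;
	}

	/* /sys/kernel/debug/tracing/trace typically reports 0 here, so
	 * pos_in (0) >= size_in (0) and the copy length is clamped to 0. */
	printf("%s: st_size = %lld\n", argv[1], (long long)st.st_size);
	return 0;
}
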
The page cache read path does similar things in
generic_file_buffered_read() to avoid truncate races exposing
stale/bad data to userspace:
	/*
	 * i_size must be checked after we know the pages are Uptodate.
	 *
	 * Checking i_size after the check allows us to calculate
	 * the correct value for "nr", which means the zero-filled
	 * part of the page is not copied back to userspace (unless
	 * another truncate extends the file - this is desired though).
	 */
	isize = i_size_read(inode);
	if (unlikely(iocb->ki_pos >= isize))
		goto put_pages;
> > 'cat' "works" in this situation because it doesn't check the file
> > size and just attempts to read unconditionally from the file. Hence
> > it happily returns non-existent stale data from busted filesystem
> > implementations that allow data to be read from beyond EOF...
>
> `cp` also works, so does `dd` and basically any other file operation.

They do not use a syscall interface that can offload work to
filesystems, low-level block layer software, hardware and/or remote
systems. copy_file_range() is restricted to regular files and does
complex stuff that read() and friends will never do, so we have
strictly enforced rules to prevent people from playing fast and
loose and silently corrupting user data with it....
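
If userspace wants the benefits of copy_file_range() but still has to
cope with such files, the defensive pattern is to fall back to a
plain read()/write() loop when the fast path copies nothing or isn't
supported. A sketch (not from any particular runtime; the chunk size
and the errno list are illustrative):

#define _GNU_SOURCE
#include <unistd.h>
#include <errno.h>

static ssize_t read_write_copy(int fd_in, int fd_out)
{
	char buf[65536];
	ssize_t nread, total = 0;

	while ((nread = read(fd_in, buf, sizeof(buf))) > 0) {
		char *p = buf;

		while (nread > 0) {
			ssize_t nw = write(fd_out, p, nread);

			if (nw < 0)
				return -1;
			p += nw;
			nread -= nw;
			total += nw;
		}
	}
	return nread < 0 ? -1 : total;
}

ssize_t copy_fd(int fd_in, int fd_out)
{
	ssize_t n, total = 0, rest;

	/* Fast path: let the kernel offload the copy. */
	while ((n = copy_file_range(fd_in, NULL, fd_out, NULL,
				    1 << 20, 0)) > 0)
		total += n;

	if (n < 0 && errno != EINVAL && errno != EXDEV && errno != ENOSYS)
		return -1;			/* a real error */
	if (n == 0 && total > 0)
		return total;			/* hit EOF normally */

	/* Copied nothing from offset 0 (e.g. a zero-i_size sysfs file)
	 * or the syscall isn't usable here: fall back to plain reads. */
	rest = read_write_copy(fd_in, fd_out);
	return rest < 0 ? -1 : total + rest;
}

The fallback is harmless for a genuinely empty regular file: the
read() loop simply returns 0 immediately.
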
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com