Date:   Fri, 21 Jan 2022 10:57:55 +1100
From:   Dave Chinner <david@...morbit.com>
To:     Eric Biggers <ebiggers@...nel.org>
Cc:     "Darrick J. Wong" <djwong@...nel.org>,
        Christoph Hellwig <hch@...radead.org>,
        linux-fscrypt@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-ext4@...r.kernel.org, linux-f2fs-devel@...ts.sourceforge.net,
        linux-xfs@...r.kernel.org, Theodore Ts'o <tytso@....edu>,
        Jaegeuk Kim <jaegeuk@...nel.org>, Chao Yu <chao@...nel.org>
Subject: Re: [PATCH v10 0/5] add support for direct I/O with fscrypt using
 blk-crypto

On Thu, Jan 20, 2022 at 02:48:52PM -0800, Eric Biggers wrote:
> On Fri, Jan 21, 2022 at 09:04:14AM +1100, Dave Chinner wrote:
> > On Thu, Jan 20, 2022 at 01:00:27PM -0800, Darrick J. Wong wrote:
> > > On Thu, Jan 20, 2022 at 12:39:14PM -0800, Eric Biggers wrote:
> > > > On Thu, Jan 20, 2022 at 09:10:27AM -0800, Darrick J. Wong wrote:
> > > > > On Thu, Jan 20, 2022 at 12:30:23AM -0800, Christoph Hellwig wrote:
> > > > > > On Wed, Jan 19, 2022 at 11:12:10PM -0800, Eric Biggers wrote:
> > > > > > > 
> > > > > > > Given the above, as far as I know the only remaining objection to this
> > > > > > > patchset would be that DIO constraints aren't sufficiently discoverable
> > > > > > > by userspace.  Now, to put this in context, this is a longstanding issue
> > > > > > > with all Linux filesystems, except XFS which has XFS_IOC_DIOINFO.  It's
> > > > > > > not specific to this feature, and it doesn't actually seem to be too
> > > > > > > important in practice; many other filesystem features place constraints
> > > > > > > on DIO, and f2fs even *only* allows fully FS block size aligned DIO.
> > > > > > > (And for better or worse, many systems using fscrypt already have
> > > > > > > out-of-tree patches that enable DIO support, and people don't seem to
> > > > > > > have trouble with the FS block size alignment requirement.)
> > > > > > 
> > > > > > It might make sense to use this as an opportunity to implement
> > > > > > XFS_IOC_DIOINFO for ext4 and f2fs.
> > > > > 
> > > > > Hmm.  A potential problem with DIOINFO is that it doesn't explicitly
> > > > > list the /file/ position alignment requirement:
> > > > > 
> > > > > struct dioattr {
> > > > > 	__u32		d_mem;		/* data buffer memory alignment */
> > > > > 	__u32		d_miniosz;	/* min xfer size		*/
> > > > > 	__u32		d_maxiosz;	/* max xfer size		*/
> > > > > };
> > > > 
> > > > Well, the comment above struct dioattr says:
> > > > 
> > > > 	/*
> > > > 	 * Direct I/O attribute record used with XFS_IOC_DIOINFO
> > > > 	 * d_miniosz is the min xfer size, xfer size multiple and file seek offset
> > > > 	 * alignment.
> > > > 	 */
> > > > 
> > > > So d_miniosz serves that purpose already.
> > > > 
> > > > > 
> > > > > Since I /think/ fscrypt requires that directio writes be aligned to file
> > > > > block size, right?
> > > > 
> > > > The file position must be a multiple of the filesystem block size, yes.
> > > > Likewise for the "minimum xfer size" and "xfer size multiple", and the "data
> > > > buffer memory alignment" for that matter.  So I think XFS_IOC_DIOINFO would be
> > > > good enough for the fscrypt direct I/O case.
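
For reference, the userspace side of that is about as simple as it
gets. A rough, untested sketch (assuming the <xfs/xfs.h> header that
xfsprogs installs, which carries struct dioattr and XFS_IOC_DIOINFO):

	#include <stdio.h>
	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <xfs/xfs.h>	/* struct dioattr, XFS_IOC_DIOINFO */

	int main(int argc, char **argv)
	{
		struct dioattr	da;
		int		fd;

		if (argc < 2) {
			fprintf(stderr, "usage: %s <file>\n", argv[0]);
			return 1;
		}

		fd = open(argv[1], O_RDONLY);
		if (fd < 0 || ioctl(fd, XFS_IOC_DIOINFO, &da) < 0) {
			perror("XFS_IOC_DIOINFO");
			return 1;
		}

		/* d_miniosz doubles as the file offset/length alignment */
		printf("mem align %u, min/multiple io %u, max io %u\n",
		       da.d_mem, da.d_miniosz, da.d_maxiosz);
		return 0;
	}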
> > > 
> > > Oh, ok then.  In that case, just hoist XFS_IOC_DIOINFO to the VFS and
> > > add a couple of implementations for ext4 and f2fs, and I think that'll
> > > be enough to get the fscrypt patchset moving again.
> > 
> > On the contrary, I'd much prefer to see this information added to
> > statx(). The file offset alignment info is a property of the current
> > file (e.g. XFS can have different per-file requirements depending on
> > whether the file data is hosted on the data or RT device, etc) and
> > so it's not a fixed property of the filesystem.
> > 
> > statx() was designed to be extended with per-file property
> > information, and we already have stuff like filesystem block size in
> > that syscall. Hence I would much prefer that we extend it with the
> > DIO properties we need to support rather than "create" a new VFS
> > ioctl to extract this information. We already have statx(), so let's
> > use it for what it was intended for.
> > 
> 
> I assumed that XFS_IOC_DIOINFO *was* per-file.  XFS's *implementation* of it
> looks at the filesystem only,

You've got that wrong.

        case XFS_IOC_DIOINFO: {
>>>>>>          struct xfs_buftarg      *target = xfs_inode_buftarg(ip);
                struct dioattr          da;

                da.d_mem =  da.d_miniosz = target->bt_logical_sectorsize;

xfs_inode_buftarg() determines which block device the inode is
storing its data on, so the returned dioattr values can be
different for different inodes in the same filesystem...
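
xfs_inode_buftarg() itself is just this - quoting from memory of
fs/xfs/xfs_inode.h, so treat it as illustrative:

	static inline struct xfs_buftarg *
	xfs_inode_buftarg(struct xfs_inode *ip)
	{
		/* RT file data lives on the realtime device... */
		if (XFS_IS_REALTIME_INODE(ip))
			return ip->i_mount->m_rtdev_targp;
		/* ...everything else lives on the data device. */
		return ip->i_mount->m_ddev_targp;
	}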

It's always been that way since the early Irix days - XFS RT devices
could have very different IO constraints from the data device, and
DIO had to conform to the hardware limits underlying the filesystem.
Hence the dioattr information has -always- been per-inode
information.

> (Per-file state is required for encrypted
> files.  It's also required for other filesystem features; e.g., files that use
> compression or fs-verity don't support direct I/O at all.)

Which is exactly why it should be a property of statx(), rather than
trying to re-use a ~30 year old filesystem-specific API from a
different OS that was never intended to indicate things like "DIO
not supported on this file at all"....
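
To be concrete about the sort of thing I mean: a new statx mask bit
and a couple of per-file fields would cover it. The names below are
purely illustrative - a sketch, not a proposal:

	/* hypothetical sketch - mask bit and field names made up here */
	#define STATX_DIOALIGN		0x00002000U	/* want DIO alignment info */

	/* new per-file fields carved out of the spare space in struct statx */
	__u32	stx_dio_mem_align;	/* DIO buffer memory alignment */
	__u32	stx_dio_offset_align;	/* DIO file offset/length alignment */

A zero in those fields would then be the natural way to say "DIO is
not supported on this file at all", which dioattr simply cannot
express.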

We've been bitten many times by this "lift a rarely used
filesystem-specific ioctl to the VFS because it exists" method of
API promotion. It almost always ends with us discovering further
down the track that there's something wrong with the API: it doesn't
quite do what we need, we have to extend it anyway, or it's just
plain broken, etc. And then we have to create a new, fit-for-purpose
API anyway, and there are two VFS APIs we have to maintain forever
instead of just one...

Can we learn from past mistakes this time instead of repeating them
yet again?

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
