Message-ID: <20170703162145.GZ5874@birch.djwong.org>
Date: Mon, 3 Jul 2017 09:21:45 -0700
From: "Darrick J. Wong" <darrick.wong@...cle.com>
To: Andreas Gruenbacher <agruenba@...hat.com>
Cc: Christoph Hellwig <hch@....de>, Jan Kara <jack@...e.cz>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-xfs@...r.kernel.org, linux-ext4 <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH 5/5] ext4: Switch to iomap for SEEK_HOLE / SEEK_DATA
On Mon, Jul 03, 2017 at 05:03:54PM +0200, Andreas Gruenbacher wrote:
> On Fri, Jun 30, 2017 at 7:37 PM, Christoph Hellwig <hch@....de> wrote:
> > On Fri, Jun 30, 2017 at 01:51:10PM +0200, Andreas Gruenbacher wrote:
> >> Also, ext4 no longer calls inode_lock or inode_lock_shared; that needs
> >> to be added back for consistency between reading i_size and walking
> >> the file extents.
> >
> > At least for XFS we never had such a consistency as we never took
> > the iolock (aka i_rwsem).
>
> What else does this piece of code from mainline xfs_seek_hole_data()
> do instead then?
>
> lock = xfs_ilock_data_map_shared(ip);
To avoid confusion, I'll start with: ilock != iolock.

If I'm reading everything correctly, there are three levels of inode
locks that must be taken in XFS: first the IOLOCK (aka i_rwsem) to
protect against concurrent IO where we need that, then the MMAPLOCK
(ISTR this was created to handle DAX page faults, which don't take
i_rwsem?), and finally the ILOCK for inode core/extent map updates.
I think page locks are supposed to be taken before the ILOCK.
xfs_ilock_data_map_shared() is a helper that takes the ILOCK in shared
or exclusive mode depending on whether the inode's extent map has
already been cached in memory: shared when it has, exclusive when
reading it in may modify the in-core inode.
The ILOCK must be held before reading or writing the extent map.
--D
>
> end = i_size_read(inode);
> offset = __xfs_seek_hole_data(inode, start, end, whence);
>
> Thanks,
> Andreas
> --