Message-ID: <20210513190114.GJ2734@quack2.suse.cz>
Date: Thu, 13 May 2021 21:01:14 +0200
From: Jan Kara <jack@...e.cz>
To: Matthew Wilcox <willy@...radead.org>
Cc: Jan Kara <jack@...e.cz>, linux-fsdevel@...r.kernel.org,
Christoph Hellwig <hch@...radead.org>,
Dave Chinner <david@...morbit.com>, ceph-devel@...r.kernel.org,
Chao Yu <yuchao0@...wei.com>,
Damien Le Moal <damien.lemoal@....com>,
"Darrick J. Wong" <darrick.wong@...cle.com>,
Jaegeuk Kim <jaegeuk@...nel.org>,
Jeff Layton <jlayton@...nel.org>,
Johannes Thumshirn <jth@...nel.org>,
linux-cifs@...r.kernel.org, linux-ext4@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net, linux-mm@...ck.org,
linux-xfs@...r.kernel.org, Miklos Szeredi <miklos@...redi.hu>,
Steve French <sfrench@...ba.org>, Ted Tso <tytso@....edu>
Subject: Re: [PATCH 03/11] mm: Protect operations adding pages to page cache
with invalidate_lock
On Wed 12-05-21 15:40:21, Matthew Wilcox wrote:
> On Wed, May 12, 2021 at 03:46:11PM +0200, Jan Kara wrote:
> > Currently, serializing operations such as page fault, read, or readahead
> > against hole punching is rather difficult. The basic race scheme is
> > like:
> >
> > fallocate(FALLOC_FL_PUNCH_HOLE)              read / fault / ..
> >   truncate_inode_pages_range()
> >                                                <create pages in page
> >                                                 cache here>
> >   <update fs block mapping and free blocks>
> >
> > Now the problem is that in this way read / page fault / readahead can
> > instantiate pages in the page cache with potentially stale data (if the
> > blocks get quickly reused). Avoiding this race is not simple - page locks
> > do not work because we want to make sure there are *no* pages in the
> > given range. inode->i_rwsem does not work because page faults happen
> > under mmap_sem, which ranks below inode->i_rwsem. Also, using it for
> > reads makes the performance of mixed read-write workloads suffer.
> >
> > So create a new rw_semaphore in the address_space - invalidate_lock -
> > that protects adding of pages to page cache for page faults / reads /
> > readahead.
>
> Remind me (or, rather, add to the documentation) why we have to hold the
> invalidate_lock during the call to readpage / readahead, and we don't just
> hold it around the call to add_to_page_cache / add_to_page_cache_locked
> / add_to_page_cache_lru ? I appreciate that ->readpages is still going
> to suck, but we're down to just three implementations of ->readpages now
> (9p, cifs & nfs).
There's a comment in filemap_create_page() trying to explain this. We need
to protect against cases like the following: a filesystem with 1k blocksize,
file F has a page at index 0 with an uptodate buffer at 0-1k, the rest not
uptodate. All blocks underlying the page are allocated. Now let a read at
offset 1k race with a hole punch at offset 1k, length 1k.
read()                                  hole punch
...
filemap_read()
  filemap_get_pages()
    - page found in the page cache but !Uptodate
    filemap_update_page()
                                        locks everything
                                        truncate_inode_pages_range()
                                          lock_page(page)
                                          do_invalidatepage()
                                          unlock_page(page)
      locks page
      filemap_read_page()
        ->readpage()
          block underlying offset 1k
          still allocated -> map buffer
                                        free block under offset 1k
          submit IO -> corrupted data
If you think I should expand it to explain more details, please let me know.
Or maybe I should put a more detailed discussion like the one above into the
changelog?
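
BTW, to make the locking rule concrete, here is a minimal sketch (simplified
pseudocode following the shape of this series, not the exact code in the
patch) of how invalidate_lock is meant to close the window above - the
filling side holds the lock shared across the whole fill, including
->readpage(), while hole punching holds it exclusively across both the page
cache truncation and the block mapping update:

	/* read / fault / readahead side (sketch) */
	down_read(&mapping->invalidate_lock);
	/*
	 * Look up or create the page and read it in. Because we hold
	 * invalidate_lock until after ->readpage() completes, the block
	 * mapping cannot change between the page cache insertion and the
	 * moment ->readpage() maps buffers.
	 */
	filemap_read_page(...);		/* ends up calling ->readpage() */
	up_read(&mapping->invalidate_lock);

	/* hole punch side (sketch) */
	down_write(&mapping->invalidate_lock);
	truncate_inode_pages_range(mapping, start, end);
	/* update fs block mapping and free blocks */
	up_write(&mapping->invalidate_lock);

Holding the lock only around add_to_page_cache() would not help here, since
->readpage() could still map a block that the hole punch frees immediately
afterwards - which is exactly the corruption in the diagram above.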
> Also, could I trouble you to run the comments through 'fmt' (or
> equivalent)? It's easier to read if you're not kissing right up on 80
> columns.
Sure, will do.
> > +++ b/fs/inode.c
> > @@ -190,6 +190,9 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
> > mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
> > mapping->private_data = NULL;
> > mapping->writeback_index = 0;
> > + init_rwsem(&mapping->invalidate_lock);
> > + lockdep_set_class(&mapping->invalidate_lock,
> > + &sb->s_type->invalidate_lock_key);
>
> Why not:
>
> __init_rwsem(&mapping->invalidate_lock, "mapping.invalidate_lock",
> &sb->s_type->invalidate_lock_key);
I replicated what we do for i_rwsem, but you're right, this is better.
Updated.
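
So the initialization in inode_init_always() will now read (roughly, in the
same context as the hunk above):

	mapping->writeback_index = 0;
	__init_rwsem(&mapping->invalidate_lock, "mapping.invalidate_lock",
		     &sb->s_type->invalidate_lock_key);

i.e. the lockdep class and name get set up in one call instead of
init_rwsem() followed by lockdep_set_class().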
Honza
--
Jan Kara <jack@...e.com>
SUSE Labs, CR