Message-Id: <20240102123918.799062-23-yi.zhang@huaweicloud.com>
Date: Tue, 2 Jan 2024 20:39:15 +0800
From: Zhang Yi <yi.zhang@...weicloud.com>
To: linux-ext4@...r.kernel.org
Cc: linux-fsdevel@...r.kernel.org, tytso@....edu, adilger.kernel@...ger.ca,
	jack@...e.cz, ritesh.list@...il.com, hch@...radead.org,
	djwong@...nel.org, willy@...radead.org, yi.zhang@...wei.com,
	yi.zhang@...weicloud.com, chengzhihao1@...wei.com, yukuai3@...wei.com,
	wangkefeng.wang@...wei.com
Subject: [RFC PATCH v2 22/25] ext4: fall back to buffer_head path for defrag

From: Zhang Yi <yi.zhang@...wei.com>

Online defrag doesn't support the iomap path yet, so we have to fall
back to the buffer_head path for inodes that have been using iomap.
Falling back an active inode is dangerous: we must write back and drop
all dirty pages under the inode lock and mapping->invalidate_lock,
which protect us from adding new folios into the mapping.

Signed-off-by: Zhang Yi <yi.zhang@...wei.com>
---
 fs/ext4/move_extent.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
index 3aa57376d9c2..7a9ca71d4cac 100644
--- a/fs/ext4/move_extent.c
+++ b/fs/ext4/move_extent.c
@@ -538,6 +538,34 @@ mext_check_arguments(struct inode *orig_inode,
 	return 0;
 }
 
+/*
+ * Disable the buffered iomap path for an inode that requires moving
+ * extents; fall back to the buffer_head path.
+ */
+static int ext4_disable_buffered_iomap_aops(struct inode *inode)
+{
+	int err;
+
+	/*
+	 * The buffer_head aops don't know how to handle folios
+	 * dirtied by iomap, so before falling back, flush all dirty
+	 * folios the inode has.
+	 */
+	filemap_invalidate_lock(inode->i_mapping);
+	err = filemap_write_and_wait(inode->i_mapping);
+	if (err < 0) {
+		filemap_invalidate_unlock(inode->i_mapping);
+		return err;
+	}
+	truncate_inode_pages(inode->i_mapping, 0);
+
+	ext4_clear_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP);
+	ext4_set_aops(inode);
+	filemap_invalidate_unlock(inode->i_mapping);
+
+	return 0;
+}
+
 /**
  * ext4_move_extents - Exchange the specified range of a file
  *
@@ -609,6 +637,12 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp, __u64 orig_blk,
 	inode_dio_wait(orig_inode);
 	inode_dio_wait(donor_inode);
 
+	/* Fall back to buffer_head aops for inodes with buffered iomap aops */
+	if (ext4_test_inode_state(orig_inode, EXT4_STATE_BUFFERED_IOMAP))
+		ext4_disable_buffered_iomap_aops(orig_inode);
+	if (ext4_test_inode_state(donor_inode, EXT4_STATE_BUFFERED_IOMAP))
+		ext4_disable_buffered_iomap_aops(donor_inode);
+
 	/* Protect extent tree against block allocations via delalloc */
 	ext4_double_down_write_data_sem(orig_inode, donor_inode);
 	/* Check the filesystem environment whether move_extent can be done */
-- 
2.39.2
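
For context, online defrag reaches ext4_move_extents() through the
EXT4_IOC_MOVE_EXT ioctl, as issued by e4defrag. Below is a minimal
userspace sketch of that path; it is an illustration, not part of the
patch. The struct move_extent layout and ioctl number mirror the
definitions in fs/ext4/ext4.h, while the file names and block count are
arbitrary examples.

/*
 * Minimal sketch of an e4defrag-style caller: exchange blocks between an
 * original file and a donor file via EXT4_IOC_MOVE_EXT, which ends up in
 * ext4_move_extents() in the kernel.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>

struct move_extent {
	uint32_t reserved;	/* should always be zero */
	uint32_t donor_fd;	/* donor file descriptor */
	uint64_t orig_start;	/* logical start offset in blocks, orig file */
	uint64_t donor_start;	/* logical start offset in blocks, donor file */
	uint64_t len;		/* number of blocks to be moved */
	uint64_t moved_len;	/* filled in by the kernel on return */
};
#define EXT4_IOC_MOVE_EXT	_IOWR('f', 15, struct move_extent)

int main(int argc, char **argv)
{
	struct move_extent me = { 0 };
	int orig_fd, donor_fd;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <orig> <donor>\n", argv[0]);
		return 1;
	}
	orig_fd = open(argv[1], O_RDWR);
	donor_fd = open(argv[2], O_RDWR);
	if (orig_fd < 0 || donor_fd < 0) {
		perror("open");
		return 1;
	}

	me.donor_fd = donor_fd;
	me.orig_start = 0;	/* first block of the original file */
	me.donor_start = 0;	/* first block of the donor file */
	me.len = 256;		/* exchange 256 blocks, for example */

	if (ioctl(orig_fd, EXT4_IOC_MOVE_EXT, &me) < 0) {
		perror("EXT4_IOC_MOVE_EXT");
		return 1;
	}
	printf("moved %llu blocks\n", (unsigned long long)me.moved_len);
	return 0;
}

With this patch applied, if either the original or the donor inode is
currently using the buffered iomap aops, ext4_move_extents() first
flushes and drops its page cache and switches it back to the
buffer_head aops via ext4_disable_buffered_iomap_aops() before taking
the data semaphores and exchanging extents.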