Message-Id: <1415926774-49230-2-git-send-email-jaegeuk@kernel.org>
Date: Thu, 13 Nov 2014 16:59:33 -0800
From: Jaegeuk Kim <jaegeuk@...nel.org>
To: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net
Cc: Jaegeuk Kim <jaegeuk@...nel.org>
Subject: [PATCH 2/3] f2fs: fix deadlock to grab 0'th data page
The scenario is as follows.

One thread triggers:
  f2fs_write_data_pages
    lock_page
    f2fs_write_data_page
      f2fs_lock_op                          <- waits

The other thread triggers:
  f2fs_truncate
    truncate_blocks
      f2fs_lock_op
        truncate_partial_data_page
          lock_page                         <- waits for the page lock
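The two paths therefore take the page lock and f2fs_lock_op() in opposite
order, a classic AB-BA inversion (f2fs_lock_op() grabs sbi->cp_rwsem for
read, so the writeback side only blocks there once a checkpoint writer is
queued on the rwsem). Below is a minimal userspace sketch of that inversion,
not f2fs code: the names writeback_path/truncate_path/page_lock/op_lock are
made up for illustration, and cp_rwsem is collapsed into a plain mutex. The
program is expected to hang, which is the point.

/*
 * Hypothetical userspace model of the AB-BA inversion, not f2fs code.
 * "page_lock" stands in for lock_page(), "op_lock" for f2fs_lock_op().
 * Build with: gcc -pthread model.c -o model; it is expected to hang.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t op_lock = PTHREAD_MUTEX_INITIALIZER;

/* models f2fs_write_data_pages -> f2fs_write_data_page */
static void *writeback_path(void *arg)
{
	pthread_mutex_lock(&page_lock);		/* lock_page */
	sleep(1);				/* let the other thread run */
	pthread_mutex_lock(&op_lock);		/* f2fs_lock_op <- waits */
	pthread_mutex_unlock(&op_lock);
	pthread_mutex_unlock(&page_lock);
	return NULL;
}

/* models f2fs_truncate -> truncate_blocks */
static void *truncate_path(void *arg)
{
	pthread_mutex_lock(&op_lock);		/* f2fs_lock_op */
	sleep(1);
	pthread_mutex_lock(&page_lock);		/* lock_page <- waits */
	pthread_mutex_unlock(&page_lock);
	pthread_mutex_unlock(&op_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, writeback_path, NULL);
	pthread_create(&b, NULL, truncate_path, NULL);
	pthread_join(a, NULL);			/* never returns: deadlock */
	pthread_join(b, NULL);
	puts("no deadlock");
	return 0;
}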
This patch resolves the bug by relocating truncate_partial_data_page() so
that it is called after f2fs_unlock_op(). That function only zeroes out the
partially truncated user data page and is not related to FS consistency, so
it does not need to run under f2fs_lock_op().

In addition, we no longer need to call truncate_inline_data() here; instead,
f2fs_write_data_page() will update the inline data later.
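For reference, a rough consolidation of the two hunks below, showing the
tail of truncate_blocks() after this change (surrounding context and error
handling omitted):

free_next:
	err = truncate_inode_blocks(inode, free_from);
out:
	if (lock)
		f2fs_unlock_op(sbi);	/* drop cp_rwsem before touching the page */

	/* lastly zero out the first data page */
	if (!err)
		err = truncate_partial_data_page(inode, from);

	trace_f2fs_truncate_blocks_exit(inode, err);
	return err;

With this ordering, truncate never waits for a page lock while holding
f2fs_lock_op(), so it can no longer deadlock against the writeback path
shown above.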
Signed-off-by: Jaegeuk Kim <jaegeuk@...nel.org>
---
 fs/f2fs/file.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 54722a0..edc3ce8 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -477,8 +477,6 @@ int truncate_blocks(struct inode *inode, u64 from, bool lock)
 	}
 
 	if (f2fs_has_inline_data(inode)) {
-		truncate_inline_data(ipage, from);
-		update_inode(inode, ipage);
 		f2fs_put_page(ipage, 1);
 		goto out;
 	}
@@ -504,13 +502,13 @@ int truncate_blocks(struct inode *inode, u64 from, bool lock)
 	f2fs_put_dnode(&dn);
 free_next:
 	err = truncate_inode_blocks(inode, free_from);
+out:
+	if (lock)
+		f2fs_unlock_op(sbi);
 
 	/* lastly zero out the first data page */
 	if (!err)
 		err = truncate_partial_data_page(inode, from);
-out:
-	if (lock)
-		f2fs_unlock_op(sbi);
 
 	trace_f2fs_truncate_blocks_exit(inode, err);
 	return err;
--
2.1.1