Message-ID: <139522fa-ad23-ccce-52cb-e7fa9caf2394@kernel.org>
Date: Fri, 10 Dec 2021 22:59:35 +0800
From: Chao Yu <chao@...nel.org>
To: Hyeong-Jun Kim <hj514.kim@...sung.com>,
Fengnan Chang <changfengnan@...o.com>,
Jaegeuk Kim <jaegeuk@...nel.org>
Cc: Sungjong Seo <sj1557.seo@...sung.com>,
Youngjin Gil <youngjin.gil@...sung.com>,
linux-f2fs-devel@...ts.sourceforge.net,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] f2fs: compress: fix potential deadlock of compress file
On 2021/12/10 12:30, Hyeong-Jun Kim wrote:
> There is a potential deadlock between the writeback process and a process
> performing write_begin() or write_cache_pages() while both try to write the
> same compressed file whose data turns out not to be compressible, as below:
>
> [Process A] - doing checkpoint
> [Process B]                             [Process C]
> f2fs_write_cache_pages()
> - lock_page() [all pages in cluster, 0-31]
> - f2fs_write_multi_pages()
>  - f2fs_write_raw_pages()
>   - f2fs_write_single_data_page()
>    - f2fs_do_write_data_page()
>      - return -EAGAIN [f2fs_trylock_op() failed]
>   - unlock_page(page) [e.g., page 0]
>                                         - generic_perform_write()
>                                          - f2fs_write_begin()
>                                           - f2fs_prepare_compress_overwrite()
>                                            - prepare_compress_overwrite()
>                                             - lock_page() [e.g., page 0]
>                                             - lock_page() [e.g., page 1]
>   - lock_page(page) [e.g., page 0]
>
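[Side note, purely illustrative and not part of the patch: the trace above is a
classic ABBA inversion on two page locks. A minimal userspace analogy with
pthread mutexes standing in for the page locks (none of these names are f2fs
code) hangs the same way:]

/*
 * Userspace analogy of the ABBA inversion above: "page0"/"page1"
 * stand in for the per-page locks; thread_b/thread_c mirror
 * Process B and Process C.  Illustrative only, not f2fs code.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t page0 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t page1 = PTHREAD_MUTEX_INITIALIZER;

/* "Process B": kept page 1 locked after -EAGAIN, retries page 0. */
static void *thread_b(void *arg)
{
	pthread_mutex_lock(&page1);
	sleep(1);			/* let C take page 0 first */
	printf("B: waiting for page 0\n");
	pthread_mutex_lock(&page0);	/* blocks forever: C holds it */
	pthread_mutex_unlock(&page0);
	pthread_mutex_unlock(&page1);
	return NULL;
}

/* "Process C": locking the cluster pages in order, like
 * prepare_compress_overwrite() in the trace. */
static void *thread_c(void *arg)
{
	pthread_mutex_lock(&page0);	/* free: B dropped page 0 */
	sleep(1);			/* make sure B already holds page 1 */
	printf("C: waiting for page 1\n");
	pthread_mutex_lock(&page1);	/* blocks forever: B holds it */
	pthread_mutex_unlock(&page1);
	pthread_mutex_unlock(&page0);
	return NULL;
}

int main(void)
{
	pthread_t b, c;

	pthread_create(&b, NULL, thread_b, NULL);
	pthread_create(&c, NULL, thread_c, NULL);
	pthread_join(b, NULL);		/* never returns: ABBA deadlock */
	pthread_join(c, NULL);
	return 0;
}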
> Since no compression takes place in this case, it is no longer necessary
> to hold locks on every page of the cluster within f2fs_write_raw_pages().
>
> This patch changes f2fs_write_raw_pages() to release all page locks
> first and then perform the writes the same way f2fs_write_cache_pages()
> handles non-compressed files.
>
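[For readers less familiar with the compress path, the flow described above
boils down to roughly the following shape. This is a simplified sketch, not
the actual diff: the struct compress_ctx fields (rpages, cluster_size) come
from f2fs, but f2fs_write_single_data_page() takes more arguments in the real
code and its error/retry handling is omitted here:]

/*
 * Sketch only: drop every page lock taken for the cluster up front,
 * then lock and write each page individually, mirroring how
 * f2fs_write_cache_pages() treats non-compressed files.
 */
static int write_raw_pages_sketch(struct compress_ctx *cc,
				  struct writeback_control *wbc)
{
	int i, ret = 0;

	/* Step 1: release all page locks held across the cluster. */
	for (i = 0; i < cc->cluster_size; i++) {
		if (!cc->rpages[i])
			continue;
		redirty_page_for_writepage(wbc, cc->rpages[i]);
		unlock_page(cc->rpages[i]);
	}

	/* Step 2: write pages one by one, holding only one page lock
	 * at a time, so the write_begin() path can make progress. */
	for (i = 0; i < cc->cluster_size; i++) {
		if (!cc->rpages[i])
			continue;

		lock_page(cc->rpages[i]);

		if (!PageDirty(cc->rpages[i]) ||
		    !clear_page_dirty_for_io(cc->rpages[i])) {
			unlock_page(cc->rpages[i]);
			continue;
		}

		/* Simplified call; the real helper takes submitted/io_type/
		 * etc. and unlocks the page itself on its exit path. */
		ret = f2fs_write_single_data_page(cc->rpages[i], wbc);
		if (ret)
			break;
	}
	return ret;
}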
> Fixes: 4c8ff7095bef ("f2fs: support data compression")
> Signed-off-by: Hyeong-Jun Kim <hj514.kim@...sung.com>
> Signed-off-by: Sungjong Seo <sj1557.seo@...sung.com>
> Signed-off-by: Youngjin Gil <youngjin.gil@...sung.com>
Looks good to me. Thanks to Fengnan and Hyeong-Jun for the report, and to
Hyeong-Jun for the fix. :)
Reviewed-by: Chao Yu <chao@...nel.org>
Thanks,