Message-ID: <2e4d5ba4-3525-322d-2aa6-3387d9822f5e@huawei.com>
Date: Mon, 23 May 2022 09:50:34 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: Hugh Dickins <hughd@...gle.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
Matthew Wilcox <willy@...radead.org>
Subject: Re: [PATCH next] mm/shmem: fix shmem folio swapoff hang
On 2022/5/22 10:53, Hugh Dickins wrote:
> Shmem swapoff makes no progress: the index to indices is not incremented.
Yes, there would be an infinite loop in the while loop in shmem_unuse_inode().
> But "ret" is no longer a return value, so use folio_batch_count() instead.
>
> Fixes: da08e9b79323 ("mm/shmem: convert shmem_swapin_page() to shmem_swapin_folio()")
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
This patch looks good to me! Thanks!
Reviewed-by: Miaohe Lin <linmiaohe@...wei.com>
Tested-by: Miaohe Lin <linmiaohe@...wei.com>
BTW: while trying to fix an infinite loop that happens when swapping in a shmem page
fails at swapoff time, I also ran into this issue last Saturday [1]. ;)
[1] https://lore.kernel.org/linux-mm/0f6dc98b-88f4-c0c9-eb7b-5356ad0e08b1@huawei.com/
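
For anyone reading along, here is a minimal userspace sketch of the pattern
(hypothetical names, not the kernel code itself): using the current count of
the batch as the index into indices[] keeps the two arrays in step as entries
are added, whereas an index that is never incremented keeps overwriting slot 0
and the caller never makes progress.

/*
 * Sketch only, with made-up names: collect matching items into a
 * fixed-size batch, recording each item's position in indices[] at
 * the slot given by the batch's current count.
 */
#include <stdio.h>

#define BATCH_SIZE 15

struct batch {
	unsigned int nr;
	int items[BATCH_SIZE];
};

static unsigned int batch_count(struct batch *b)
{
	return b->nr;			/* entries stored so far */
}

static int batch_add(struct batch *b, int item)
{
	b->items[b->nr++] = item;
	return b->nr < BATCH_SIZE;	/* 0 once the batch is full */
}

int main(void)
{
	struct batch b = { 0 };
	unsigned long indices[BATCH_SIZE];

	for (int i = 0; i < 40; i++) {
		if (i % 3)		/* pretend only some entries match */
			continue;
		/* next free slot, advances as the batch fills */
		indices[batch_count(&b)] = i;
		if (!batch_add(&b, i))
			break;
	}

	for (unsigned int j = 0; j < batch_count(&b); j++)
		printf("slot %u -> index %lu\n", j, indices[j]);
	return 0;
}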
> ---
>
> mm/shmem.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1171,7 +1171,6 @@ static int shmem_find_swap_entries(struc
> XA_STATE(xas, &mapping->i_pages, start);
> struct folio *folio;
> swp_entry_t entry;
> - unsigned int ret = 0;
>
> rcu_read_lock();
> xas_for_each(&xas, folio, ULONG_MAX) {
> @@ -1189,7 +1188,7 @@ static int shmem_find_swap_entries(struc
> if (swp_type(entry) != type)
> continue;
>
> - indices[ret] = xas.xa_index;
> + indices[folio_batch_count(fbatch)] = xas.xa_index;
> if (!folio_batch_add(fbatch, folio))
> break;
>
>
> .
>