Message-ID: <20220304082714.GB3778609@hori.linux.bs1.fc.nec.co.jp>
Date: Fri, 4 Mar 2022 08:27:14 +0000
From: HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>
To: Miaohe Lin <linmiaohe@...wei.com>
CC: "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/4] mm/memory-failure.c: fix wrong user reference report
On Mon, Feb 28, 2022 at 10:02:43PM +0800, Miaohe Lin wrote:
> The dirty swapcache page still resides in the swap cache after it is
> hwpoisoned, so there is always one extra refcount from the swap cache.
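
(For reference, the check this feeds into is has_extra_refcount(); a rough
paraphrase of my reading of it, possibly not matching the exact code, is
below. Passing extra_pins == true just subtracts one more expected reference
before deciding whether to report stray users:

    /*
     * Sketch, not a verbatim copy: the page still holds the reference taken
     * by memory_failure() itself, and extra_pins excuses one additional
     * expected reference -- here the swap cache reference mentioned above.
     * (In the real code ps is also used to name the page type in the report.)
     */
    static bool has_extra_refcount(struct page_state *ps, struct page *p,
                                   bool extra_pins)
    {
            int count = page_count(p) - 1;  /* ref held by memory_failure() */

            if (extra_pins)
                    count -= 1;             /* expected extra pin (swap cache) */

            if (count > 0) {
                    pr_err("%#lx: still referenced by %d users\n",
                           page_to_pfn(p), count);
                    return true;
            }
            return false;
    }

so always passing true here means the swap cache reference is not
miscounted as a stray user reference.)
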
The diff seems fine at a glance, but let me ask a few questions to
understand the issue better.
- Is the behavior described above an effect of the recent change on shmem,
  where a dirty pagecache page is pinned on hwpoison (commit a76054266661
  ("mm: shmem: don't truncate page if memory failure happens"))?  Or do
  older kernels behave the same way?
- Is the behavior true for normal anonymous pages (not shmem pages)?
I'm trying to test hwpoison hitting a dirty swapcache page, but it seems that
in my testing memory_failure() fails with the "hwpoison: unhandlable page"
warning at get_any_page().  So I'm still not sure whether the change to
me_swapcache_dirty() fixes any visible problem.
Thanks,
Naoya Horiguchi
>
> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
> ---
> mm/memory-failure.c | 6 +-----
> 1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 0d7c58340a98..5f9503573263 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -984,7 +984,6 @@ static int me_pagecache_dirty(struct page_state *ps, struct page *p)
> static int me_swapcache_dirty(struct page_state *ps, struct page *p)
> {
> int ret;
> - bool extra_pins = false;
>
> ClearPageDirty(p);
> /* Trigger EIO in shmem: */
> @@ -993,10 +992,7 @@ static int me_swapcache_dirty(struct page_state *ps, struct page *p)
> ret = delete_from_lru_cache(p) ? MF_FAILED : MF_DELAYED;
> unlock_page(p);
>
> - if (ret == MF_DELAYED)
> - extra_pins = true;
> -
> - if (has_extra_refcount(ps, p, extra_pins))
> + if (has_extra_refcount(ps, p, true))
> ret = MF_FAILED;
>
> return ret;
> --
> 2.23.0