Message-ID: <20221216014729.GA2116060@hori.linux.bs1.fc.nec.co.jp>
Date: Fri, 16 Dec 2022 01:47:31 +0000
From: HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>
To: Kefeng Wang <wangkefeng.wang@...wei.com>
CC: "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"tony.luck@...el.com" <tony.luck@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linmiaohe@...wei.com" <linmiaohe@...wei.com>,
David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH -next resend v3] mm: hwpoison: support recovery from
 ksm_might_need_to_copy()
On Tue, Dec 13, 2022 at 08:05:23PM +0800, Kefeng Wang wrote:
> When the kernel copies a page from ksm_might_need_to_copy(), but runs
> into an uncorrectable error, it will crash since the poisoned page is
> consumed by the kernel, this is similar to Copy-on-write poison recovery,
Maybe you mean "this is similar to the issue recently fixed by
Copy-on-write poison recovery."? And if this sentence ends here,
please put "." instead of ",".
> When an error is detected during the page copy, return VM_FAULT_HWPOISON
> in do_swap_page(), and install a hwpoison entry in unuse_pte() when
> swapoff, which helps us to avoid a system crash. Note that memory failure
> on a KSM page will be skipped, but memory_failure_queue() is still called
> to be consistent with the general memory failure process.
Thank you for the work. I have a few comments below ...
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@...wei.com>
> ---
> v3 resend:
> - enhance unuse_pte() if ksm_might_need_to_copy() return -EHWPOISON
> - fix issue found by lkp
>
> mm/ksm.c | 8 ++++++--
> mm/memory.c | 3 +++
> mm/swapfile.c | 20 ++++++++++++++------
> 3 files changed, 23 insertions(+), 8 deletions(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index dd02780c387f..83e2f74ae7da 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2629,8 +2629,12 @@ struct page *ksm_might_need_to_copy(struct page *page,
> new_page = NULL;
> }
> if (new_page) {
> - copy_user_highpage(new_page, page, address, vma);
> -
> + if (copy_mc_user_highpage(new_page, page, address, vma)) {
> + put_page(new_page);
> + new_page = ERR_PTR(-EHWPOISON);
> + memory_failure_queue(page_to_pfn(page), 0);
> + return new_page;
Simply return ERR_PTR(-EHWPOISON)?
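I.e., something like this? (just an untested sketch of the same change)

        if (copy_mc_user_highpage(new_page, page, address, vma)) {
                put_page(new_page);
                memory_failure_queue(page_to_pfn(page), 0);
                return ERR_PTR(-EHWPOISON);
        }

That drops the temporary new_page assignment.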
> + }
> SetPageDirty(new_page);
> __SetPageUptodate(new_page);
> __SetPageLocked(new_page);
> diff --git a/mm/memory.c b/mm/memory.c
> index aad226daf41b..5b2c137dfb2a 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3840,6 +3840,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> if (unlikely(!page)) {
> ret = VM_FAULT_OOM;
> goto out_page;
> + } else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
> + ret = VM_FAULT_HWPOISON;
> + goto out_page;
> }
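(Minor nit: as -EHWPOISON is the only ERR_PTR value that
ksm_might_need_to_copy() returns in this series, the check could also
be written with IS_ERR(), e.g. (untested):

        } else if (unlikely(IS_ERR(page))) {
                ret = VM_FAULT_HWPOISON;
                goto out_page;
        }

Not a blocker, just a thought.)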
> folio = page_folio(page);
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 908a529bca12..0efb1c2c2415 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1763,12 +1763,15 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> struct page *swapcache;
> spinlock_t *ptl;
> pte_t *pte, new_pte;
> + bool hwpoisoned = false;
> int ret = 1;
>
> swapcache = page;
> page = ksm_might_need_to_copy(page, vma, addr);
> if (unlikely(!page))
> return -ENOMEM;
> + else if (unlikely(PTR_ERR(page) == -EHWPOISON))
> + hwpoisoned = true;
>
> pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> if (unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry)))) {
> @@ -1776,15 +1779,19 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> goto out;
> }
>
> - if (unlikely(!PageUptodate(page))) {
> - pte_t pteval;
> + if (hwpoisoned || !PageUptodate(page)) {
> + swp_entry_t swp_entry;
>
> dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
> - pteval = swp_entry_to_pte(make_swapin_error_entry());
> - set_pte_at(vma->vm_mm, addr, pte, pteval);
> - swap_free(entry);
> + if (hwpoisoned) {
> + swp_entry = make_hwpoison_entry(swapcache);
> + page = swapcache;
This might work for the process accessing the broken page, but ksm
pages are likely to be shared by multiple processes, so it would be
much nicer if you could convert all mapping entries for the error ksm
page into hwpoisoned ones. Maybe in this thorough approach,
hwpoison_user_mappings() would be updated to call try_to_unmap() for ksm
pages. But it's not necessary to do this together with applying
mcsafe-memcpy, because the recovery action and mcsafe-memcpy can be
done independently.
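For reference, hwpoison_user_mappings() currently bails out on ksm
pages roughly like below, so the thorough approach would be to drop
that early return and fall through to the existing try_to_unmap()
call (an untested sketch from memory, not a concrete patch):

        /* mm/memory-failure.c, hwpoison_user_mappings() */
        if (PageKsm(p)) {
                pr_err("%#lx: can't handle KSM pages.\n", pfn);
                return false;
        }

        /*
         * Without the early return, the later try_to_unmap(folio, ttu)
         * would walk the ksm stable-tree rmap via rmap_walk_ksm(), and
         * try_to_unmap_one() would replace each pte mapping the error
         * page with a hwpoison swap entry.
         */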
Thanks,
Naoya Horiguchi
> + } else {
> + swp_entry = make_swapin_error_entry();
> + }
> + new_pte = swp_entry_to_pte(swp_entry);
> ret = 0;
> - goto out;
> + goto setpte;
> }
>
> /* See do_swap_page() */
> @@ -1816,6 +1823,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> new_pte = pte_mksoft_dirty(new_pte);
> if (pte_swp_uffd_wp(*pte))
> new_pte = pte_mkuffd_wp(new_pte);
> +setpte:
> set_pte_at(vma->vm_mm, addr, pte, new_pte);
> swap_free(entry);
> out:
> --
> 2.35.3