Message-ID: <0a67888e-6cf4-3a9d-32b1-adbbfcaf2aec@linux.alibaba.com>
Date: Wed, 26 Oct 2022 13:19:57 +0800
From: Shuai Xue <xueshuai@...ux.alibaba.com>
To: Tony Luck <tony.luck@...el.com>,
Naoya Horiguchi <naoya.horiguchi@....com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Miaohe Lin <linmiaohe@...wei.com>,
Matthew Wilcox <willy@...radead.org>,
Dan Williams <dan.j.williams@...el.com>,
Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH v3 0/2] Copy-on-write poison recovery
On 2022/10/23 at 11:52 PM, Shuai Xue wrote:
>
>
> On 2022/10/22 at 4:01 AM, Tony Luck wrote:
>> Part 1 deals with the process that triggered the copy-on-write
>> fault with a store to a shared read-only page. That process is
>> sent a SIGBUS with the usual machine check decoration to specify
>> the virtual address of the lost page, together with the scope.
>>
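[Editorial note: to illustrate what "the usual machine check decoration" means here: the SIGBUS carries si_code BUS_MCEERR_AR plus the failing virtual address and the lost-memory granularity. A minimal sketch follows, using the existing force_sig_mceerr() helper; the wrapper name wp_copy_report_poison() is hypothetical, and in the series itself the architecture fault handler performs the actual kill, as the log annotations below show.]

	/*
	 * Minimal sketch, not the actual patch: deliver SIGBUS to the
	 * current (faulting) task with the machine-check "decoration":
	 * si_code BUS_MCEERR_AR, the lost virtual address, and the lsb
	 * describing how much memory around it is gone (one page here).
	 * force_sig_mceerr() is the existing helper for this; the wrapper
	 * name is made up for illustration.
	 */
	#include <linux/sched/signal.h>
	#include <linux/mm.h>

	static void wp_copy_report_poison(unsigned long addr)
	{
		force_sig_mceerr(BUS_MCEERR_AR, (void __user *)addr, PAGE_SHIFT);
	}
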
>> Part 2 sets up to asynchronously take the page with the uncorrected
>> error offline to prevent additional machine check faults. H/t to
>> Miaohe Lin <linmiaohe@...wei.com> and Shuai Xue <xueshuai@...ux.alibaba.com>
>> for pointing me to the existing function to queue a call to
>> memory_failure().
>>
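[Editorial note: the "existing function to queue a call to memory_failure()" is memory_failure_queue(). A rough sketch of how the copy-on-write path can hand the bad pfn off for asynchronous offlining, built on the existing copy_mc_to_kernel() and kmap_local_page() APIs; the function name cow_copy_page_mc() and the exact call site are illustrative, not the actual diff.]

	/*
	 * Rough sketch, not the actual diff: do the CoW copy with the
	 * machine-check-safe copy_mc_to_kernel().  If it reports bytes
	 * left uncopied, the source page is poisoned, so queue its pfn
	 * and let memory_failure() run later from a workqueue instead
	 * of inside the page-fault path.
	 */
	#include <linux/highmem.h>
	#include <linux/mm.h>

	static bool cow_copy_page_mc(struct page *dst, struct page *src)
	{
		char *vfrom = kmap_local_page(src);
		char *vto   = kmap_local_page(dst);
		unsigned long left = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);

		kunmap_local(vto);
		kunmap_local(vfrom);

		if (left) {
			/* Existing helper: schedule memory_failure() asynchronously */
			memory_failure_queue(page_to_pfn(src), 0);
			return false;	/* caller reports the fault / sends SIGBUS */
		}
		return true;
	}
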
>> On x86 there is some duplicate reporting (because the error is
>> signalled both by the memory controller and by the core that
>> triggered the machine check). Console logs look like this:
>>
>> [ 1647.723403] mce: [Hardware Error]: Machine check events logged
>> Machine check from kernel copy routine
>>
>> [ 1647.723414] MCE: Killing einj_mem_uc:3600 due to hardware memory corruption fault at 7f3309503400
>> x86 fault handler sends SIGBUS to child process
>>
>> [ 1647.735183] Memory failure: 0x905b92d: recovery action for dirty LRU page: Recovered
>> Async call to memory_failure() from copy on write path
>
> The recovery action might also be handled asynchronously by the uc_decode_notifier()
> handler for the CMCI signaled by the memory controller, right?
>
> I see one more memory failure log than you do.
>
> [ 3187.485742] MCE: Killing einj_mem_uc:31746 due to hardware memory corruption fault at 7fc4bf7cf400
> [ 3187.740620] Memory failure: 0x1a3b80: recovery action for dirty LRU page: Recovered
> uc_decode_notifier() processes memory controller report
>
> [ 3187.748272] Memory failure: 0x1a3b80: already hardware poisoned
> Workqueue: events memory_failure_work_func // queued by ghes_do_memory_failure
>
> [ 3187.754194] Memory failure: 0x1a3b80: already hardware poisoned
> Workqueue: events memory_failure_work_func // queued by __wp_page_copy_user
>
> [ 3188.615920] MCE: Killing einj_mem_uc:31745 due to hardware memory corruption fault at 7fc4bf7cf400
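[Editorial note: the repeated "already hardware poisoned" lines are expected to be harmless: memory_failure() flags the page on the first call, and later calls for the same pfn bail out early. A simplified sketch, from my reading of mm/memory-failure.c rather than a verbatim quote; the name memory_failure_sketch() is made up.]

	/*
	 * Simplified sketch of the early-exit path in memory_failure().
	 * The real function has much more locking and hugepage handling,
	 * but the "already hardware poisoned" lines in the log above come
	 * from a check along these lines.
	 */
	int memory_failure_sketch(unsigned long pfn, int flags)
	{
		struct page *p = pfn_to_page(pfn);

		if (TestSetPageHWPoison(p)) {
			pr_err("%#lx: already hardware poisoned\n", pfn);
			return -EHWPOISON;	/* first caller already handled it */
		}

		/* ... isolate the page, unmap it, signal affected tasks ... */
		return 0;
	}
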
>
> Best Regards,
> Shuai
Tested-by: Shuai Xue <xueshuai@...ux.alibaba.com>
Thank you.
Shuai
>
>>
>> [ 1647.748397] Memory failure: 0x905b92d: already hardware poisoned
>> uc_decode_notifier() processes memory controller report
>>
>> [ 1647.761313] MCE: Killing einj_mem_uc:3599 due to hardware memory corruption fault at 7f3309503400
>> Parent process tries to read poisoned page. Page has been unmapped, so
>> #PF handler sends SIGBUS
>>
>>
>> Tony Luck (2):
>> mm, hwpoison: Try to recover from copy-on write faults
>> mm, hwpoison: When copy-on-write hits poison, take page offline
>>
>> include/linux/highmem.h | 24 ++++++++++++++++++++++++
>> include/linux/mm.h | 5 ++++-
>> mm/memory.c | 32 ++++++++++++++++++++++----------
>> 3 files changed, 50 insertions(+), 11 deletions(-)
>>