Message-ID: <20231129185406.GBZWeIzqwgRQe7XDo/@fat_crate.local>
Date: Wed, 29 Nov 2023 19:54:06 +0100
From: Borislav Petkov <bp@...en8.de>
To: Shuai Xue <xueshuai@...ux.alibaba.com>, james.morse@....com
Cc: rafael@...nel.org, wangkefeng.wang@...wei.com,
tanxiaofei@...wei.com, mawupeng1@...wei.com, tony.luck@...el.com,
linmiaohe@...wei.com, naoya.horiguchi@....com,
gregkh@...uxfoundation.org, will@...nel.org, jarkko@...nel.org,
linux-acpi@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
linux-edac@...r.kernel.org, acpica-devel@...ts.linuxfoundation.org,
stable@...r.kernel.org, x86@...nel.org, justin.he@....com,
ardb@...nel.org, ying.huang@...el.com, ashish.kalra@....com,
baolin.wang@...ux.alibaba.com, tglx@...utronix.de,
mingo@...hat.com, dave.hansen@...ux.intel.com, lenb@...nel.org,
hpa@...or.com, robert.moore@...el.com, lvying6@...wei.com,
xiexiuqi@...wei.com, zhuo.song@...ux.alibaba.com
Subject: Re: [PATCH v9 0/2] ACPI: APEI: handle synchronous errors in task
work with proper si_code
Moving James to To:
On Sun, Nov 26, 2023 at 08:25:38PM +0800, Shuai Xue wrote:
> > On Sat, Nov 25, 2023 at 02:44:52PM +0800, Shuai Xue wrote:
> >> - an AR error consumed by the current process is deferred to a
> >> dedicated kernel thread for handling, but memory_failure() assumes
> >> that it runs in the current context
> >
> > On x86? ARM?
> >
> > Please point to the exact code flow.
>
> An AR error consumed by the current process is deferred to a dedicated
> kernel thread for handling on the ARM platform. The AR error is handled
> in the flow below:
>
> -----------------------------------------------------------------------------
> [usr space task einj_mem_uc consumed data poison, CPU 3] STEP 0
>
> -----------------------------------------------------------------------------
> [ghes_sdei_critical_callback: current einj_mem_uc, CPU 3] STEP 1
> ghes_sdei_critical_callback
> => __ghes_sdei_callback
> => ghes_in_nmi_queue_one_entry // peek and read estatus
> => irq_work_queue(&ghes_proc_irq_work) <=> ghes_proc_in_irq // irq_work
> [ghes_sdei_critical_callback: return]
> -----------------------------------------------------------------------------
> [ghes_proc_in_irq: current einj_mem_uc, CPU 3] STEP 2
> => ghes_do_proc
> => ghes_handle_memory_failure
> => ghes_do_memory_failure
> => memory_failure_queue // put work task on current CPU
> => if (kfifo_put(&mf_cpu->fifo, entry))
> schedule_work_on(smp_processor_id(), &mf_cpu->work);
> => task_work_add(current, &estatus_node->task_work, TWA_RESUME);
> [ghes_proc_in_irq: return]
> -----------------------------------------------------------------------------
> // kworker preempts einj_mem_uc on CPU 3 due to RESCHED flag STEP 3
> [memory_failure_work_func: current kworker, CPU 3]
> => memory_failure_work_func(&mf_cpu->work)
> => while kfifo_get(&mf_cpu->fifo, &entry); // until no work is left
> => memory_failure(entry.pfn, entry.flags);
From the comment above that function:
* The function is primarily of use for corruptions that
* happen outside the current execution context (e.g. when
* detected by a background scrubber)
*
* Must run in process context (e.g. a work queue) with interrupts
* enabled and no spinlocks held.
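And the deferral via memory_failure_queue() is what satisfies that
requirement - roughly the following, paraphrased from
mm/memory-failure.c with error paths and soft-offline handling trimmed,
so don't take it as the exact mainline code:

	/*
	 * May be called from IRQ/NMI-like context: only stashes the pfn
	 * in a per-CPU kfifo and kicks a work item on the same CPU.
	 */
	void memory_failure_queue(unsigned long pfn, int flags)
	{
		struct memory_failure_cpu *mf_cpu;
		unsigned long proc_flags;
		struct memory_failure_entry entry = {
			.pfn	= pfn,
			.flags	= flags,
		};

		mf_cpu = &get_cpu_var(memory_failure_cpu);
		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
		if (kfifo_put(&mf_cpu->fifo, entry))
			schedule_work_on(smp_processor_id(), &mf_cpu->work);
		else
			pr_err("buffer overflow when queuing memory failure at %#lx\n",
			       pfn);
		spin_unlock_irqrestore(&mf_cpu->lock, proc_flags);
		put_cpu_var(memory_failure_cpu);
	}

	/*
	 * Runs later in a kworker, i.e. in process context with
	 * interrupts enabled, as the comment above demands, and drains
	 * the per-CPU fifo.
	 */
	static void memory_failure_work_func(struct work_struct *work)
	{
		struct memory_failure_cpu *mf_cpu;
		struct memory_failure_entry entry = { 0, };
		unsigned long proc_flags;
		int gotten;

		mf_cpu = container_of(work, struct memory_failure_cpu, work);
		for (;;) {
			spin_lock_irqsave(&mf_cpu->lock, proc_flags);
			gotten = kfifo_get(&mf_cpu->fifo, &entry);
			spin_unlock_irqrestore(&mf_cpu->lock, proc_flags);
			if (!gotten)
				break;
			memory_failure(entry.pfn, entry.flags);
		}
	}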
> -----------------------------------------------------------------------------
> [ghes_kick_task_work: current einj_mem_uc, other cpu] STEP 4
> => memory_failure_queue_kick
> => cancel_work_sync - wait for memory_failure_work_func to finish
> => memory_failure_work_func(&mf_cpu->work)
> => kfifo_get(&mf_cpu->fifo, &entry); // no work
> -----------------------------------------------------------------------------
> [einj_mem_uc resumes at the same PC, triggers a page fault] STEP 5
>
> STEP 0: A user space task named einj_mem_uc consumes a poison. The
> firmware notifies the hardware error to the kernel through SDEI
> (ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED).
>
> STEP 1: The swapper running on CPU 3 is interrupted. irq_work_queue()
> raises an irq_work to handle hardware errors in IRQ context.
>
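(The irq_work hand-off in STEP 1 is the usual <linux/irq_work.h>
pattern; simplified from drivers/acpi/apei/ghes.c, with the estatus
queuing details omitted, so details may differ between versions:

	static struct irq_work ghes_proc_irq_work;

	/* set up once at init time */
	init_irq_work(&ghes_proc_irq_work, ghes_proc_in_irq);

	/*
	 * From the NMI-like SDEI callback, which must not sleep: queue
	 * the estatus node, then ask for ghes_proc_in_irq() to be run
	 * in IRQ context on this CPU.
	 */
	irq_work_queue(&ghes_proc_irq_work);
)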
> STEP 2: In IRQ context, ghes_proc_in_irq() queues memory failure work
> on the current CPU in the workqueue and adds task work to sync with
> the workqueue.
>
> STEP 3: The kworker preempts the currently running thread and gets
> CPU 3. Then memory_failure() is processed in the kworker.
See above.
> STEP 4: ghes_kick_task_work() is called as task work to ensure any
> queued work has been done before returning to user space.
>
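The STEP 4 synchronization is, roughly (again paraphrased from
mm/memory-failure.c, not the exact mainline code):

	/*
	 * Called via task work before the interrupted task returns to
	 * user space: wait for a work item already running elsewhere,
	 * then drain whatever is still left in this CPU's fifo.
	 */
	void memory_failure_queue_kick(int cpu)
	{
		struct memory_failure_cpu *mf_cpu;

		mf_cpu = &per_cpu(memory_failure_cpu, cpu);
		cancel_work_sync(&mf_cpu->work);
		memory_failure_work_func(&mf_cpu->work);
	}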
> STEP 5: Upon returning to user space, the task einj_mem_uc resumes at
> the current instruction; because the poisoned page was unmapped by
> memory_failure() in step 3, a page fault is triggered.
>
> memory_failure() assumes that it runs in the current context on both
> the x86 and ARM platforms.
>
>
> For example, memory_failure() in mm/memory-failure.c:
>
> if (flags & MF_ACTION_REQUIRED) {
> folio = page_folio(p);
> res = kill_accessing_process(current, folio_pfn(folio), flags);
> }
And?
Do you see the check above it?
if (TestSetPageHWPoison(p)) {
test_and_set_bit() returns true only when the page was poisoned already.
* This function is intended to handle "Action Required" MCEs on already
* hardware poisoned pages. They could happen, for example, when
* memory_failure() failed to unmap the error page at the first call, or
* when multiple local machine checks happened on different CPUs.
And that's kill_accessing_process().
So AFAIU, the kworker running memory_failure() would only mark the page
as poisoned.
The killing happens when memory_failure() runs again and the process
touches the page again.
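I.e., the flow in memory_failure() is roughly this (paraphrased, not
the exact mainline code):

	if (TestSetPageHWPoison(p)) {
		/*
		 * Second call for the same pfn - e.g. after the task
		 * faulted on the now-unmapped page: only now is the
		 * accessing process killed.
		 */
		res = -EHWPOISON;
		if (flags & MF_ACTION_REQUIRED)
			res = kill_accessing_process(current, pfn, flags);
		goto unlock_mutex;
	}

	/*
	 * First call for this pfn: mark the page poisoned, try to
	 * unmap it, etc.
	 */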
But I'd let James confirm here.
I still don't know what you're fixing here.
Is this something you're encountering on some machine or did you simply
stare at the code?
What does that
"Both Alibaba and Huawei met the same issue in products, and we hope it
could be fixed ASAP."
mean?
What did you meet?
What was the problem?
I still note that you're avoiding answering the question of what the
issue is, and if you keep avoiding it, I'll ignore this whole thread.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette