Message-ID: <20191023020133.GA24383@hori.linux.bs1.fc.nec.co.jp>
Date: Wed, 23 Oct 2019 02:01:33 +0000
From: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
To: Oscar Salvador <osalvador@...e.de>
CC: Michal Hocko <mhocko@...nel.org>,
"mike.kravetz@...cle.com" <mike.kravetz@...cle.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v2 10/16] mm,hwpoison: Rework soft offline for free
pages
On Tue, Oct 22, 2019 at 11:58:52AM +0200, Oscar Salvador wrote:
> On Tue, Oct 22, 2019 at 11:22:56AM +0200, Michal Hocko wrote:
> > Hmm, that might be a misunderstanding on my end. I thought that it is
> > the MCE handler to say whether the failure is recoverable or not. If yes
> > then we can touch the content of the memory (that would imply the
> > migration). Other than that both paths should be essentially the same,
> > no? Well unrecoverable case would be essentially force migration failure
> > path.
> >
> > MADV_HWPOISON is explicitly documented to test MCE handling IIUC:
> > : This feature is intended for testing of memory error-handling
> > : code; it is available only if the kernel was configured with
> > : CONFIG_MEMORY_FAILURE.
> >
> > There is no explicit note about the type of the error that is injected
> > but I think it is reasonably safe to assume this is a recoverable one.
>
> MADV_HWPOISON stands for hard-offline.
> MADV_SOFT_OFFLINE stands for soft-offline.
Maybe MADV_HWPOISON should have been named something like MADV_HARD_OFFLINE,
although it's part of the API and hard to change once implemented.
>
> MADV_SOFT_OFFLINE (since Linux 2.6.33)
> Soft offline the pages in the range specified by addr and
> length. The memory of each page in the specified range is
> preserved (i.e., when next accessed, the same content will be
> visible, but in a new physical page frame), and the original
> page is offlined (i.e., no longer used, and taken out of
> normal memory management). The effect of the
> MADV_SOFT_OFFLINE operation is invisible to (i.e., does not
> change the semantics of) the calling process.
>
> This feature is intended for testing of memory error-handling
> code;
Although this wording might not be clear enough, madvise(MADV_HWPOISON or
MADV_SOFT_OFFLINE) only covers the memory error handling part, not the MCE
handling part. We have some other injection methods in the lower layers,
like mce-inject and APEI.
> it is available only if the kernel was configured with
> CONFIG_MEMORY_FAILURE.
>
>
> But both follow different approaches.
>
> I think it is up to some controlers to trigger soft-offline or hard-offline:
Yes, I think so. One use case of soft offline is triggered by the CMCI
interrupt on Intel CPUs. The CMCI handler stores corrected error events in
/dev/mcelog. mcelogd polls this device file, and if corrected errors occur
often enough (IIRC the default threshold is "10 events in 24 hours"),
mcelogd triggers soft-offline via the soft_offline_page interface under /sys.
OTOH, hard-offline is triggered directly from the MCE handler (more
precisely, over a ring buffer to separate the context). mcelogd logs MCE
events but is not involved in the page offline logic.
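For reference, a short sketch of the sysfs side of that flow. It only probes for the soft-offline interface rather than actually offlining anything (writing to the file requires root and CONFIG_MEMORY_FAILURE), and the commented-out physical address is purely hypothetical.

```shell
# Probe for the sysfs interface a daemon like mcelogd can use to request
# soft-offline of a page. We deliberately do not write to it here.
SOFT_OFFLINE=/sys/devices/system/memory/soft_offline_page
if [ -e "$SOFT_OFFLINE" ]; then
	echo "soft-offline interface present: $SOFT_OFFLINE"
	# A privileged daemon would write a physical address, e.g.:
	# echo 0x404f8d000 > "$SOFT_OFFLINE"
else
	echo "soft-offline interface not present (CONFIG_MEMORY_FAILURE off?)"
fi
```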
>
> static void ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata, int sev)
> {
> #ifdef CONFIG_ACPI_APEI_MEMORY_FAILURE
> ...
> /* iff following two events can be handled properly by now */
> if (sec_sev == GHES_SEV_CORRECTED &&
> (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
> flags = MF_SOFT_OFFLINE;
> if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
> flags = 0;
>
> if (flags != -1)
> memory_failure_queue(pfn, flags);
> ...
> #endif
> }
>
>
> static void memory_failure_work_func(struct work_struct *work)
> {
> ...
> for (;;) {
> spin_lock_irqsave(&mf_cpu->lock, proc_flags);
> gotten = kfifo_get(&mf_cpu->fifo, &entry);
> spin_unlock_irqrestore(&mf_cpu->lock, proc_flags);
> if (!gotten)
> break;
> if (entry.flags & MF_SOFT_OFFLINE)
> soft_offline_page(pfn_to_page(entry.pfn), entry.flags);
> else
> memory_failure(entry.pfn, entry.flags);
> }
> }
>
> AFAICS, for hard-offline case, a recovered event would be if:
>
> - the page to shut down is already free
> - the page was unmapped
>
> In some cases we need to kill the process if it holds dirty pages.
One caveat is that even if the process maps dirty error pages, we
don't have to kill it unless the error data is consumed.
Thanks,
Naoya Horiguchi