Message-ID: <20191022095852.GB20429@linux>
Date: Tue, 22 Oct 2019 11:58:52 +0200
From: Oscar Salvador <osalvador@...e.de>
To: Michal Hocko <mhocko@...nel.org>
Cc: n-horiguchi@...jp.nec.com, mike.kravetz@...cle.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 10/16] mm,hwpoison: Rework soft offline for free
pages
On Tue, Oct 22, 2019 at 11:22:56AM +0200, Michal Hocko wrote:
> Hmm, that might be a misunderstanding on my end. I thought that it is
> the MCE handler that says whether the failure is recoverable or not. If
> yes, then we can touch the content of the memory (which would imply the
> migration). Other than that, both paths should be essentially the same,
> no? Well, the unrecoverable case would essentially be the forced
> migration failure path.
>
> MADV_HWPOISON is explicitly documented to test MCE handling IIUC:
> : This feature is intended for testing of memory error-handling
> : code; it is available only if the kernel was configured with
> : CONFIG_MEMORY_FAILURE.
>
> There is no explicit note about the type of the error that is injected
> but I think it is reasonably safe to assume this is a recoverable one.
MADV_HWPOISON stands for hard-offline.
MADV_SOFT_OFFLINE stands for soft-offline.
       MADV_SOFT_OFFLINE (since Linux 2.6.33)
              Soft offline the pages in the range specified by addr and
              length.  The memory of each page in the specified range is
              preserved (i.e., when next accessed, the same content will be
              visible, but in a new physical page frame), and the original
              page is offlined (i.e., no longer used, and taken out of
              normal memory management).  The effect of the
              MADV_SOFT_OFFLINE operation is invisible to (i.e., does not
              change the semantics of) the calling process.

              This feature is intended for testing of memory error-handling
              code; it is available only if the kernel was configured with
              CONFIG_MEMORY_FAILURE.
But both follow different approaches.
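As a side note, the soft-offline path can be poked from userspace with
something like the sketch below. This is only an illustrative, untested
snippet: it assumes CONFIG_MEMORY_FAILURE and CAP_SYS_ADMIN, and the
fallback #define is only there in case the libc headers do not expose
MADV_SOFT_OFFLINE:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_SOFT_OFFLINE
#define MADV_SOFT_OFFLINE 101   /* uapi/asm-generic/mman-common.h */
#endif

int main(void)
{
        long pagesize = sysconf(_SC_PAGESIZE);
        char *p = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                return 1;

        /* Fault the page in and put a recognizable pattern there. */
        memset(p, 0xaa, pagesize);

        /* Ask the kernel to soft-offline the backing page frame. */
        if (madvise(p, pagesize, MADV_SOFT_OFFLINE)) {
                perror("madvise(MADV_SOFT_OFFLINE)");
                return 1;
        }

        /* Per the man page, the same content must still be visible,
           just backed by a different physical page frame. */
        printf("content preserved: %s\n",
               (unsigned char)p[0] == 0xaa ? "yes" : "no");
        return 0;
}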
I think it is up to the error controllers (GHES in this case) to decide whether to trigger soft-offline or hard-offline:
static void ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata, int sev)
{
#ifdef CONFIG_ACPI_APEI_MEMORY_FAILURE
        ...
        /* iff following two events can be handled properly by now */
        if (sec_sev == GHES_SEV_CORRECTED &&
            (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
                flags = MF_SOFT_OFFLINE;
        if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
                flags = 0;

        if (flags != -1)
                memory_failure_queue(pfn, flags);
        ...
#endif
}
static void memory_failure_work_func(struct work_struct *work)
{
        ...
        for (;;) {
                spin_lock_irqsave(&mf_cpu->lock, proc_flags);
                gotten = kfifo_get(&mf_cpu->fifo, &entry);
                spin_unlock_irqrestore(&mf_cpu->lock, proc_flags);
                if (!gotten)
                        break;
                if (entry.flags & MF_SOFT_OFFLINE)
                        soft_offline_page(pfn_to_page(entry.pfn), entry.flags);
                else
                        memory_failure(entry.pfn, entry.flags);
        }
}
AFAICS, for the hard-offline case, a recovered event would be one where:

 - the page to shut down was already free
 - the page was unmapped

In some cases we need to kill the process if it holds dirty pages.
But we never migrate contents in the hard-offline path.
I guess that is because we cannot really trust the contents anymore.
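That difference is visible from userspace as well: with the default
late-kill policy, after MADV_HWPOISON the old content is simply gone and
the next access should raise SIGBUS, while MADV_SOFT_OFFLINE keeps the
data in a new page frame. A rough, untested sketch (again assuming
CONFIG_MEMORY_FAILURE and CAP_SYS_ADMIN) along these lines:

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_HWPOISON
#define MADV_HWPOISON 100       /* uapi/asm-generic/mman-common.h */
#endif

static sigjmp_buf env;

static void sigbus_handler(int sig)
{
        siglongjmp(env, 1);
}

int main(void)
{
        long pagesize = sysconf(_SC_PAGESIZE);
        char *p = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                return 1;

        signal(SIGBUS, sigbus_handler);

        /* Dirty the page so it is mapped and has content. */
        memset(p, 0xaa, pagesize);

        /* Hard-offline: handled like a real memory error. */
        if (madvise(p, pagesize, MADV_HWPOISON)) {
                perror("madvise(MADV_HWPOISON)");
                return 1;
        }

        if (sigsetjmp(env, 1) == 0) {
                /* Unlike soft-offline, the old content is not migrated;
                   touching the page should raise SIGBUS. */
                printf("read back: %#x\n", (unsigned char)p[0]);
        } else {
                printf("got SIGBUS on access, as expected\n");
        }
        return 0;
}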
--
Oscar Salvador
SUSE L3