Message-ID: <153074042316.27838.17319837331947007626.stgit@dwillia2-desk3.amr.corp.intel.com>
Date: Wed, 04 Jul 2018 14:40:23 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: linux-nvdimm@...ts.01.org
Cc: linux-edac@...r.kernel.org, Tony Luck <tony.luck@...el.com>,
Borislav Petkov <bp@...en8.de>,
Jérôme Glisse <jglisse@...hat.com>,
Jan Kara <jack@...e.cz>, "H. Peter Anvin" <hpa@...or.com>,
x86@...nel.org, Thomas Gleixner <tglx@...utronix.de>,
Christoph Hellwig <hch@....de>,
Ross Zwisler <ross.zwisler@...ux.intel.com>,
Matthew Wilcox <mawilcox@...rosoft.com>,
Ingo Molnar <mingo@...hat.com>, Michal Hocko <mhocko@...e.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Souptick Joarder <jrdr.linux@...il.com>, hch@....de,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, jack@...e.cz,
ross.zwisler@...ux.intel.com
Subject: [PATCH v5 00/11] mm: Teach memory_failure() about ZONE_DEVICE pages

Changes since v4 [1]:
* Rework dax_lock_page() to reuse get_unlocked_mapping_entry() (Jan)
* Change the calling convention to take a 'struct page *' and return
  success / failure instead of performing the pfn_to_page() internal to
  the API (Jan, Ross). The resulting interface is sketched below.
* Rename dax_lock_page() to dax_lock_mapping_entry() (Jan)
* Account for the case that a given pfn can be fsdax-mapped with
  different sizes in different vmas (Jan)
* Update collect_procs() to determine the mapping size of the pfn for
  each page, since it can vary in the dax case.
[1]: https://lists.01.org/pipermail/linux-nvdimm/2018-June/016279.html
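
For reference, the reworked interface amounts to something like the
sketch below. The 'bool' return type and the example caller are
illustrative assumptions, not copied from the series; the authoritative
prototypes are in the "filesystem-dax: Introduce dax_lock_mapping_entry()"
patch:

  /* Lock / unlock the dax mapping entry that pins this page */
  bool dax_lock_mapping_entry(struct page *page);
  void dax_unlock_mapping_entry(struct page *page);

  /* Hypothetical caller in a memory_failure()-style path */
  static int poison_dax_page(unsigned long pfn)
  {
          struct page *page = pfn_to_page(pfn);

          if (!dax_lock_mapping_entry(page))
                  return -EBUSY; /* entry truncated or lock unavailable */
          /* ... reverse map, unmap, and signal affected processes ... */
          dax_unlock_mapping_entry(page);
          return 0;
  }
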
---
As it stands, memory_failure() gets thoroughly confused by dev_pagemap
backed mappings. The recovery code has specific enabling for several
possible page states and needs new enabling to handle poison in dax
mappings.

In order to support reliable reverse mapping of user space addresses:
1/ Add new locking in the memory_failure() rmap path to prevent races
   that would typically be handled by the page lock.
2/ Since dev_pagemap pages are hidden from the page allocator and the
   "compound page" accounting machinery, add a mechanism to determine
   the size of the mapping that encompasses a given poisoned pfn.
3/ Given pmem errors can be repaired, change the protection against
   speculative access of poison, mce_unmap_kpfn(), to be reversible and
   otherwise allow ongoing access from the kernel (see the sketch after
   this list).
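
A rough illustration of 3/ follows. The function names match the patch
titles in this series, but the exact signatures and the pmem-side caller
are assumptions for illustration only:

  /*
   * Mark a pfn so the kernel avoids (speculative) access to it,
   * replacing the one-way mce_unmap_kpfn(), and undo that marking
   * once the error has been repaired.
   */
  int set_mce_nospec(unsigned long pfn);
  int clear_mce_nospec(unsigned long pfn);

  /* Hypothetical pmem driver hook after successfully clearing poison */
  static void pmem_poison_cleared(unsigned long pfn)
  {
          clear_mce_nospec(pfn); /* restore kernel page attributes */
  }
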
A side effect of this enabling is that MADV_HWPOISON becomes usable for
dax mappings; however, the primary motivation is to allow the system to
survive userspace consumption of hardware poison via dax (a usage
sketch follows the example output below). Specifically, the current
behavior is:

mce: Uncorrected hardware memory error in user-access at af34214200
{1}[Hardware Error]: It has been corrected by h/w and requires no further action
mce: [Hardware Error]: Machine check events logged
{1}[Hardware Error]: event severity: corrected
Memory failure: 0xaf34214: reserved kernel page still referenced by 1 users
[..]
Memory failure: 0xaf34214: recovery action for reserved kernel page: Failed
mce: Memory error not recovered
<reboot>

...and with these changes:

Injecting memory failure for pfn 0x20cb00 at process virtual address 0x7f763dd00000
Memory failure: 0x20cb00: Killing dax-pmd:5421 due to hardware memory corruption
Memory failure: 0x20cb00: recovery action for dax page: Recovered
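
As noted above, MADV_HWPOISON can now target dax mappings as well. A
hypothetical userspace snippet (the file path and mapping size are
placeholders; soft-injection requires CAP_SYS_ADMIN and
CONFIG_MEMORY_FAILURE):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
          size_t len = 2UL << 20; /* a 2MiB extent of an fsdax file */
          int fd = open("/mnt/pmem/file", O_RDWR); /* placeholder path */
          void *addr;

          if (fd < 0)
                  return 1;
          addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
          if (addr == MAP_FAILED)
                  return 1;
          /* ask the kernel to treat the first page as hardware-poisoned */
          if (madvise(addr, sysconf(_SC_PAGESIZE), MADV_HWPOISON))
                  perror("madvise(MADV_HWPOISON)");
          return 0;
  }
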
Given all the cross dependencies I propose taking this through
nvdimm.git with acks from Naoya, x86/core, x86/RAS, and of course dax
folks.
---

Dan Williams (11):
      device-dax: Convert to vmf_insert_mixed and vm_fault_t
      device-dax: Enable page_mapping()
      device-dax: Set page->index
      filesystem-dax: Set page->index
      mm, madvise_inject_error: Let memory_failure() optionally take a page reference
      mm, memory_failure: Collect mapping size in collect_procs()
      filesystem-dax: Introduce dax_lock_mapping_entry()
      mm, memory_failure: Teach memory_failure() about dev_pagemap pages
      x86/mm/pat: Prepare {reserve,free}_memtype() for "decoy" addresses
      x86/memory_failure: Introduce {set,clear}_mce_nospec()
      libnvdimm, pmem: Restore page attributes when clearing errors

 arch/x86/include/asm/set_memory.h         |  42 ++++++
 arch/x86/kernel/cpu/mcheck/mce-internal.h |  15 --
 arch/x86/kernel/cpu/mcheck/mce.c          |  38 -----
 arch/x86/mm/pat.c                         |  16 ++
 drivers/dax/device.c                      |  75 +++++++----
 drivers/nvdimm/pmem.c                     |  26 ++++
 drivers/nvdimm/pmem.h                     |  13 ++
 fs/dax.c                                  | 125 +++++++++++++++++-
 include/linux/dax.h                       |  24 +++
 include/linux/huge_mm.h                   |   5 -
 include/linux/mm.h                        |   1
 include/linux/set_memory.h                |  14 ++
 mm/huge_memory.c                          |   4 -
 mm/madvise.c                              |  18 ++-
 mm/memory-failure.c                       | 201 +++++++++++++++++------

 15 files changed, 483 insertions(+), 134 deletions(-)