Message-ID: <20090618121430.GA6746@localhost>
Date: Thu, 18 Jun 2009 20:14:30 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Minchan Kim <minchan.kim@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Andi Kleen <ak@...ux.intel.com>, Ingo Molnar <mingo@...e.hu>,
Mel Gorman <mel@....ul.ie>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Nick Piggin <npiggin@...e.de>,
Hugh Dickins <hugh.dickins@...cali.co.uk>,
Andi Kleen <andi@...stfloor.org>,
"riel@...hat.com" <riel@...hat.com>,
"chris.mason@...cle.com" <chris.mason@...cle.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH 09/22] HWPOISON: Handle hardware poisoned pages in
try_to_unmap

On Wed, Jun 17, 2009 at 10:03:37PM +0800, Minchan Kim wrote:
> On Wed, Jun 17, 2009 at 10:55 PM, Wu Fengguang<fengguang.wu@...el.com> wrote:
> > On Wed, Jun 17, 2009 at 09:44:39PM +0800, Minchan Kim wrote:
> >> This is a private mail for my question.
> >> I don't want to make noise on LKML,
> >> and I don't want to disturb your progress in getting HWPoison merged.
> >>
> >> > Because this race window is small enough:
> >> >
> >> > TestSetPageHWPoison(p);
> >> > lock_page(page);
> >> > try_to_unmap(page, TTU_MIGRATION|...);
> >> > lock_page_nosync(p);
> >> >
> >> > such small race windows can be found all over the kernel, it's just
> >> > insane to try to fix any of them.
> >>
> >> I didn't know there were intentional small race windows in the kernel
> >> until you said so. I thought kernel code was perfect and wouldn't allow
> >> a race window, however small. But as you pointed out, that assumption
> >> of mine was wrong.
> >>
> >> Do you know of other intentional small race windows?
> >> If you do, please tell me - it would broaden my understanding. :)
> >
> > The memory failure code does not aim to rescue 100% of page corruptions.
> > That's an unreasonable goal - kernel pages and slab pages (including the
> > big dcache/icache) are almost impossible to isolate.
> >
> > Compared to the big slab pools, the migration and other race windows are
> > really too small to care about :)
>
> Also, if you mention this in a code comment, I will add my
> Reviewed-by.

Good suggestion. Here is a patch for comment updates; a small userspace
sketch of the early kill usage is appended after the patch.

> Thanks for your kind reply to my boring discussion.

Boring? Not at all :)

Thanks,
Fengguang
---
mm/memory-failure.c | 76 +++++++++++++++++++++++++-----------------
1 file changed, 47 insertions(+), 29 deletions(-)
--- sound-2.6.orig/mm/memory-failure.c
+++ sound-2.6/mm/memory-failure.c
@@ -1,4 +1,8 @@
/*
+ * linux/mm/memory-failure.c
+ *
+ * High level machine check handler.
+ *
* Copyright (C) 2008, 2009 Intel Corporation
* Authors: Andi Kleen, Fengguang Wu
*
@@ -6,29 +10,36 @@
* the GNU General Public License ("GPL") version 2 only as published by the
* Free Software Foundation.
*
- * High level machine check handler. Handles pages reported by the
- * hardware as being corrupted usually due to a 2bit ECC memory or cache
- * failure.
- *
- * This focuses on pages detected as corrupted in the background.
- * When the current CPU tries to consume corruption the currently
- * running process can just be killed directly instead. This implies
- * that if the error cannot be handled for some reason it's safe to
- * just ignore it because no corruption has been consumed yet. Instead
- * when that happens another machine check will happen.
- *
- * Handles page cache pages in various states. The tricky part
- * here is that we can access any page asynchronous to other VM
- * users, because memory failures could happen anytime and anywhere,
- * possibly violating some of their assumptions. This is why this code
- * has to be extremely careful. Generally it tries to use normal locking
- * rules, as in get the standard locks, even if that means the
- * error handling takes potentially a long time.
- *
- * The operation to map back from RMAP chains to processes has to walk
- * the complete process list and has non linear complexity with the number
- * mappings. In short it can be quite slow. But since memory corruptions
- * are rare we hope to get away with this.
+ * Pages are reported by the hardware as being corrupted, usually due to a
+ * 2-bit ECC memory or cache failure. A machine check can be raised either
+ * when corruption is found by background memory scrubbing, or when someone
+ * tries to consume the corruption. This code focuses on the former case. If
+ * it cannot handle the error for some reason, it's safe to just ignore it,
+ * because no corruption has been consumed yet; when the corruption is
+ * eventually consumed, another (deadly) machine check will be raised.
+ *
+ * The tricky part here is that we can access any page asynchronously with
+ * respect to other VM users, because memory failures could happen anytime
+ * and anywhere, possibly violating some of their assumptions. This is why
+ * this code has to be extremely careful. Generally it tries to use normal
+ * locking rules, i.e. take the standard locks, even if that means the error
+ * handling takes potentially a long time.
+ *
+ * We don't aim to rescue 100% of corruptions. That's an unreasonable goal -
+ * kernel text and slab pages (including the big dcache/icache) are almost
+ * impossible to isolate. We also try to keep the code clean by ignoring the
+ * thousands of other small corruption windows.
+ *
+ * When the corrupted page data is not recoverable, the tasks that have the
+ * page mapped have to be killed. We offer two kill options:
+ * - early kill with SIGBUS.BUS_MCEERR_AO (optional)
+ * - late kill with SIGBUS.BUS_MCEERR_AR (mandatory)
+ * A task is killed early, as soon as corruption is found in its virtual
+ * address space, if it has called prctl(PR_MEMORY_FAILURE_EARLY_KILL, 1, ...);
+ * any task is killed late when it actually tries to access its corrupted
+ * virtual address. The early kill option gives KVM or other apps with large
+ * caches an opportunity to isolate the corrupted page from their internal
+ * caches, so as to avoid being killed late.
*/
/*
@@ -275,6 +286,12 @@ static void collect_procs_file(struct pa
 
 		vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff,
 				      pgoff)
+			/*
+			 * Send an early kill signal to tasks whose vma covers
+			 * the page, even if they have not mapped it in their
+			 * page tables. Applications that requested early kill
+			 * normally want to be informed of such corruptions.
+			 */
 			if (vma->vm_mm == tsk->mm)
 				add_to_kill(tsk, page, vma, to_kill, tkc);
 	}
@@ -284,6 +301,12 @@ static void collect_procs_file(struct pa
 
 /*
  * Collect the processes who have the corrupted page mapped to kill.
+ *
+ * The operation to map back from RMAP chains to processes has to walk
+ * the complete process list and has non-linear complexity with the
+ * number of mappings. In short it can be quite slow. But since memory
+ * corruptions are rare and only tasks flagged PF_EARLY_KILL will be
+ * searched, we hope to get away with this.
 */
 static void collect_procs(struct page *page, struct list_head *tokill)
 {
@@ -439,7 +462,7 @@ static int me_pagecache_dirty(struct pag
* Dirty swap cache page is tricky to handle. The page could live both in page
* cache and swap cache(ie. page is freshly swapped in). So it could be
* referenced concurrently by 2 types of PTEs:
- * normal PTEs and swap PTEs. We try to handle them consistently by calling u
+ * normal PTEs and swap PTEs. We try to handle them consistently by calling
* try_to_unmap(TTU_IGNORE_HWPOISON) to convert the normal PTEs to swap PTEs,
* and then
* - clear dirty bit to prevent IO
@@ -647,11 +670,6 @@ static void hwpoison_user_mappings(struc
* mapped. This has to be done before try_to_unmap,
* because ttu takes the rmap data structures down.
*
- * This also has the side effect to propagate the dirty
- * bit from PTEs into the struct page. This is needed
- * to actually decide if something needs to be killed
- * or errored, or if it's ok to just drop the page.
- *
* Error handling: We ignore errors here because
* there's nothing that can be done.
*/
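
For reference, here is a minimal, untested userspace sketch of how an
application with a large cache might react to the early kill signal
described in the comment above. It assumes the PR_MEMORY_FAILURE_EARLY_KILL
prctl flag proposed in this series (guarded with #ifdef so the example still
builds where the flag does not exist) and the BUS_MCEERR_AO/BUS_MCEERR_AR
si_code values; my_cache_discard_page() in the handler comment is only a
hypothetical placeholder, not a real API.

/*
 * sigbus-early-kill.c - sketch of handling SIGBUS from the memory
 * failure code: opt in to early kill and react to BUS_MCEERR_AO.
 *
 * Build: gcc -Wall -o sigbus-early-kill sigbus-early-kill.c
 */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/prctl.h>

/* SIGBUS si_code values used by the HWPOISON patches. */
#ifndef BUS_MCEERR_AR
#define BUS_MCEERR_AR	4	/* corrupted data consumed: action required */
#endif
#ifndef BUS_MCEERR_AO
#define BUS_MCEERR_AO	5	/* corruption detected, not consumed: action optional */
#endif

static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
{
	const char *msg;

	(void)sig;
	(void)ctx;

	if (si->si_code == BUS_MCEERR_AO)
		msg = "early kill: dropping corrupted page from the cache\n";
	else if (si->si_code == BUS_MCEERR_AR)
		msg = "late kill: corrupted data was consumed, giving up\n";
	else
		msg = "SIGBUS for some other reason\n";

	/* Only async-signal-safe calls in here, hence write() and not printf(). */
	if (write(STDERR_FILENO, msg, strlen(msg)) < 0) {
		/* nothing sensible to do about a failed write */
	}

	if (si->si_code != BUS_MCEERR_AO)
		_exit(EXIT_FAILURE);

	/*
	 * A real cache (KVM guest memory, a database buffer pool, ...) would
	 * look up si->si_addr here, invalidate that page and reload it from
	 * backing storage, e.g. my_cache_discard_page(si->si_addr);
	 * (hypothetical helper, shown only as a placeholder).
	 */
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = sigbus_handler;
	sa.sa_flags = SA_SIGINFO;
	if (sigaction(SIGBUS, &sa, NULL))
		perror("sigaction");

#ifdef PR_MEMORY_FAILURE_EARLY_KILL	/* prctl flag proposed in this series */
	if (prctl(PR_MEMORY_FAILURE_EARLY_KILL, 1, 0, 0, 0))
		perror("prctl(PR_MEMORY_FAILURE_EARLY_KILL)");
#endif

	/* The real application would serve its cache here. */
	pause();
	return 0;
}
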
--