Message-ID: <28c262360906180631i25ea6a18mbdc5be31c2346c04@mail.gmail.com>
Date: Thu, 18 Jun 2009 22:31:52 +0900
From: Minchan Kim <minchan.kim@...il.com>
To: Wu Fengguang <fengguang.wu@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Andi Kleen <ak@...ux.intel.com>, Ingo Molnar <mingo@...e.hu>,
Mel Gorman <mel@....ul.ie>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Nick Piggin <npiggin@...e.de>,
Hugh Dickins <hugh.dickins@...cali.co.uk>,
Andi Kleen <andi@...stfloor.org>,
"riel@...hat.com" <riel@...hat.com>,
"chris.mason@...cle.com" <chris.mason@...cle.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH 09/22] HWPOISON: Handle hardware poisoned pages in
try_to_unmap
On Thu, Jun 18, 2009 at 9:14 PM, Wu Fengguang<fengguang.wu@...el.com> wrote:
> On Wed, Jun 17, 2009 at 10:03:37PM +0800, Minchan Kim wrote:
>> On Wed, Jun 17, 2009 at 10:55 PM, Wu Fengguang<fengguang.wu@...el.com> wrote:
>> > On Wed, Jun 17, 2009 at 09:44:39PM +0800, Minchan Kim wrote:
>> >> This is a private mail for my question.
>> >> I don't want to make noise on LKML,
>> >> and I don't want to disturb your progress in merging HWPoison.
>> >>
>> >> > Because this race window is small enough:
>> >> >
>> >> >         TestSetPageHWPoison(p);
>> >> >                                         lock_page(page);
>> >> >                                         try_to_unmap(page, TTU_MIGRATION|...);
>> >> >         lock_page_nosync(p);
>> >> >
>> >> > such small race windows can be found all over the kernel, it's just
>> >> > insane to try to fix any of them.
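Just to check that I read the patch right: the poisoned-page case that this
series adds to try_to_unmap_one() is, as I understand it, roughly the sketch
below. It is my own reconstruction rather than the verbatim hunk, so the exact
names (TTU_IGNORE_HWPOISON, make_hwpoison_entry(), the rss counters) should be
checked against patch 09/22 itself:

	/*
	 * Sketch of the hwpoison case in try_to_unmap_one() (mm/rmap.c).
	 * Once memory_failure() holds the page lock and calls try_to_unmap(),
	 * every pte mapping the bad page is replaced with a hwpoison swap
	 * entry, so a later user-space access faults and the task can be
	 * killed instead of silently consuming the corruption.
	 */
	if (PageHWPoison(page) && !(flags & TTU_IGNORE_HWPOISON)) {
		if (PageAnon(page))
			dec_mm_counter(mm, anon_rss);
		else
			dec_mm_counter(mm, file_rss);
		set_pte_at(mm, address, pte,
			   swp_entry_to_pte(make_hwpoison_entry(page)));
	}

If that is right, everything from lock_page_nosync() onward is serialized by
the page lock, and only the window shown above is open to migration.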
>> >>
>> >> I didn't know there were intentional small race windows in the kernel
>> >> until you said so. I thought kernel code was perfect and wouldn't allow
>> >> a race window, even a very small one. But as you pointed out, I was
>> >> wrong.
>> >>
>> >> Do you know of other small race windows that are left open on purpose?
>> >> If you do, please tell me. It would broaden my understanding. :)
>> >
>> > The memory failure code does not aim to rescue 100% of page corruptions.
>> > That's an unreasonable goal - kernel pages and slab pages (including the
>> > big dcache/icache) are almost impossible to isolate.
>> >
>> > Compared to the big slab pools, the migration and other race windows are
>> > really too small to care about :)
>>
>> Also, if you mention this in a comment in the code, I will add my
>> Reviewed-by.
>
> Good suggestion. Here is a patch for comment updates.
>
>> Thanks for your kind reply to my boring discussion.
>
> Boring? Not at all :)
>
> Thanks,
> Fengguang
>
> ---
> mm/memory-failure.c | 76 +++++++++++++++++++++++++-----------------
> 1 file changed, 47 insertions(+), 29 deletions(-)
>
> --- sound-2.6.orig/mm/memory-failure.c
> +++ sound-2.6/mm/memory-failure.c
> @@ -1,4 +1,8 @@
> /*
> + * linux/mm/memory-failure.c
> + *
> + * High level machine check handler.
> + *
> * Copyright (C) 2008, 2009 Intel Corporation
> * Authors: Andi Kleen, Fengguang Wu
> *
> @@ -6,29 +10,36 @@
> * the GNU General Public License ("GPL") version 2 only as published by the
> * Free Software Foundation.
> *
> - * High level machine check handler. Handles pages reported by the
> - * hardware as being corrupted usually due to a 2bit ECC memory or cache
> - * failure.
> - *
> - * This focuses on pages detected as corrupted in the background.
> - * When the current CPU tries to consume corruption the currently
> - * running process can just be killed directly instead. This implies
> - * that if the error cannot be handled for some reason it's safe to
> - * just ignore it because no corruption has been consumed yet. Instead
> - * when that happens another machine check will happen.
> - *
> - * Handles page cache pages in various states. The tricky part
> - * here is that we can access any page asynchronous to other VM
> - * users, because memory failures could happen anytime and anywhere,
> - * possibly violating some of their assumptions. This is why this code
> - * has to be extremely careful. Generally it tries to use normal locking
> - * rules, as in get the standard locks, even if that means the
> - * error handling takes potentially a long time.
> - *
> - * The operation to map back from RMAP chains to processes has to walk
> - * the complete process list and has non linear complexity with the number
> - * mappings. In short it can be quite slow. But since memory corruptions
> - * are rare we hope to get away with this.
> + * Pages are reported by the hardware as being corrupted, usually due to a
> + * 2-bit ECC memory or cache failure. A machine check can be raised either
> + * when corruption is found by background memory scrubbing, or when someone
> + * tries to consume the corruption. This code focuses on the former case. If
> + * it cannot handle the error for some reason, it's safe to just ignore it
> + * because no corruption has been consumed yet; when that happens, another
> + * (deadly) machine check will be raised.
> + *
> + * The tricky part here is that we can access any page asynchronously to other VM
> + * users, because memory failures could happen anytime and anywhere, possibly
> + * violating some of their assumptions. This is why this code has to be
> + * extremely careful. Generally it tries to use normal locking rules, as in get
> + * the standard locks, even if that means the error handling takes potentially
> + * a long time.
> + *
> + * We don't aim to rescue 100% of corruptions. That's an unreasonable goal -
> + * the kernel text and slab pages (including the big dcache/icache) are almost
> + * impossible to isolate. We also try to keep the code clean by ignoring the
> + * thousands of other small corruption windows.
thousands of other small corruption windows (e.g. migration, ...)
Please write down the other windows you know of, as in the example above.
Anyway, I already added my Reviewed-by.
Thanks for your tireless effort. :)
--
Kind regards,
Minchan Kim
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/