Message-ID: <4DF63B7A.1030805@redhat.com>
Date: Mon, 13 Jun 2011 19:31:54 +0300
From: Avi Kivity <avi@...hat.com>
To: Borislav Petkov <bp@...64.org>
CC: Tony Luck <tony.luck@...el.com>, Ingo Molnar <mingo@...e.hu>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Huang, Ying" <ying.huang@...el.com>,
Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>
Subject: Re: [PATCH 08/10] NOTIFIER: Take over TIF_MCE_NOTIFY and implement
task return notifier
On 06/13/2011 06:12 PM, Borislav Petkov wrote:
> > The best you can do is IPI everyone as soon as you've caught the #MC,
> > but you have to be prepared for multiple #MC for the same page. Once
> > you have that, global synchronization is not so important anymore.
>
> Yeah, in the multiple #MC case the memory_failure() thing should
> probably be made reentrant-safe (if it is not yet). And in that case,
> we'll be starting a worker thread on each CPU that caused an MCE from
> accessing that page. The thread that manages to clear all the mappings
> of our page simply does so while the others should be able to 'see' that
> there's no work to be done anymore (PFN is not mapped in the pagetables
> anymore) and exit without doing anything. Yeah, sounds doable with the
> irq_work_queue -> user_return_notifier flow.
I don't think a user_return_notifier is needed here. You don't just
want to do things before a userspace return, you also want to do them
soon. A user return notifier might take a very long time to run, if a
context switch occurs to a thread that spends a lot of time in the
kernel (perhaps a realtime thread).
So I think the best choice here is MCE -> irq_work -> realtime kernel
thread (or work queue).
--
error compiling committee.c: too many arguments to function