Message-ID: <20110523110151.GD24674@elte.hu>
Date: Mon, 23 May 2011 13:01:51 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Huang Ying <ying.huang@...el.com>
Cc: huang ying <huang.ying.caritas@...il.com>,
Len Brown <lenb@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andi Kleen <andi@...stfloor.org>,
"Luck, Tony" <tony.luck@...el.com>,
"linux-acpi@...r.kernel.org" <linux-acpi@...r.kernel.org>,
Andi Kleen <ak@...ux.intel.com>,
"Wu, Fengguang" <fengguang.wu@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Borislav Petkov <bp@...en8.de>
Subject: Re: [PATCH 5/9] HWPoison: add memory_failure_queue()
* Huang Ying <ying.huang@...el.com> wrote:
> > That's where 'active filters' come into the picture - see my other mail
> > (that was in the context of unidentified NMI errors/events) where i
> > outlined how they would work in this case and elsewhere. Via active filters
> > we could share most of the code, gain access to the events and still have
> > kernel driven policy action.
>
> Is that something like the following?
>
> - An NMI handler runs for the hardware error; the hardware error
> information is collected and put into the perf ring buffer as an 'event'.
Correct.
Note that for MCE errors we want the 'persistent event' framework Boris has
posted: we want these events to be buffered up to a point even if there is no
tool listening in on them:
- this gives us boot-time MCE error coverage
- this protects us against a logging daemon being restarted and events
getting lost
> - Some 'active filters' are run for each 'event' in NMI context.
Yeah. Whether it's a human-readable ASCII 'filter' expression or really just a
callback you register with that event is secondary - both would work.
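
To make the callback variant concrete, here is a minimal sketch. All of the
names here (hwerr_event_add_filter(), struct hwerr_filter, the event payload
layout, the severity value) are hypothetical, invented purely to illustrate
the shape of the thing - nothing below is an existing kernel interface:

#include <linux/init.h>
#include <linux/list.h>
#include <linux/types.h>

#define HWERR_SEV_CORRECTED	0	/* hypothetical severity value */

/* hypothetical per-event payload, invented for illustration */
struct hwerr_event_data {
	int severity;
};

/* an 'active filter': a callback registered with the persistent event */
struct hwerr_filter {
	/* return true when the event has been handled/consumed */
	bool (*filter)(const struct hwerr_event_data *data);
	struct list_head list;
};

/* hypothetical registration hook */
extern int hwerr_event_add_filter(struct hwerr_filter *f);

/* built-in policy example: corrected errors need no further action */
static bool corrected_error_filter(const struct hwerr_event_data *data)
{
	return data->severity == HWERR_SEV_CORRECTED;
}

static struct hwerr_filter corrected_filter = {
	.filter	= corrected_error_filter,
};

static int __init register_builtin_filters(void)
{
	return hwerr_event_add_filter(&corrected_filter);
}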
> - Some operations cannot be done in the NMI handler, so they are deferred
> to an IRQ handler (this can be done with something like irq_work).
Yes.
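
The NMI -> IRQ handoff itself needs nothing new - the irq_work API already in
mainline covers it. A minimal sketch, where only the irq_work calls are real
and the hwerr_* record and helpers are made-up placeholders:

#include <linux/init.h>
#include <linux/irq_work.h>

/* hypothetical error record and helpers, for illustration only */
struct hwerr_record { unsigned long pfn; int type; };
extern void collect_hwerr_record(struct hwerr_record *rec);
extern void process_hwerr_record(struct hwerr_record *rec);

static struct hwerr_record pending_record;
static struct irq_work hwerr_irq_work;

/* runs later, in IRQ context, where more work is permitted */
static void hwerr_irq_work_cb(struct irq_work *work)
{
	process_hwerr_record(&pending_record);
}

/* NMI context: only collect the data and kick the irq_work */
static void hwerr_nmi_handler(void)
{
	collect_hwerr_record(&pending_record);
	irq_work_queue(&hwerr_irq_work);
}

static int __init hwerr_init(void)
{
	init_irq_work(&hwerr_irq_work, hwerr_irq_work_cb);
	return 0;
}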
> - Some other 'active filters' are run for each 'event' in IRQ context.
> (For memory errors, we can call memory_failure_queue() here.)
Correct.
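
Roughly, an IRQ-context filter for memory errors could then look like the
snippet below - the hwerr_record layout is again made up (same placeholder as
in the sketch above), and memory_failure_queue() is used with the
(pfn, trapno, flags) signature from the posted series:

#define HWERR_TYPE_MEMORY	1	/* hypothetical type value */

/* IRQ context: safe to schedule page recovery from here */
static void memory_error_irq_filter(const struct hwerr_record *rec)
{
	if (rec->type != HWERR_TYPE_MEMORY)
		return;

	/* queue the affected page; no trap context, no special flags */
	memory_failure_queue(rec->pfn, 0, 0);
}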
> Some 'active filters' are built into the kernel, while others can be
> customized via the kernel command line or by user space.
Yes.
> If my understanding above is correct, I think this is a general but complex
> solution. It is a little hard for users to understand which 'active filters'
> are in effect. They may need some runtime assistance (maybe
> /sys/events/active_filters, which would list all filters currently in
> effect), because that is hard to determine by reading the source code alone.
> Anyway, this is a design style choice.
I don't think it's complex: the built-in rules are in plain sight (they can be
in the source code or can even be explicitly registered callbacks), and the
rules installed via configuration or tooling will be as complex as the admin
or tool wants them to be.
> There are still some issues that I don't know how to solve in the above
> framework.
>
> - If two processes request the same type of hardware error events, one
> hardware error event will be copied to two ring buffers (one for each
> process), but the 'active filters' should run only once for each hardware
> error event.
With persistent events 'active filters' should only be attached to the central
persistent event.
> - How do we deal with ring-buffer overflow? For example, the ring buffer is
> full of corrected memory errors, and now a recoverable memory error occurs
> but cannot be put into the perf ring buffer because of the overflow; how do
> we deal with the recoverable memory error?
The solution is to make it large enough. With *every* queueing solution there
will be some sort of queue size limit.
Thanks,
Ingo