Message-ID: <20110522132515.GA13078@elte.hu>
Date:	Sun, 22 May 2011 15:25:15 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	huang ying <huang.ying.caritas@...il.com>
Cc:	Huang Ying <ying.huang@...el.com>, Len Brown <lenb@...nel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Andi Kleen <andi@...stfloor.org>,
	"Luck, Tony" <tony.luck@...el.com>,
	"linux-acpi@...r.kernel.org" <linux-acpi@...r.kernel.org>,
	Andi Kleen <ak@...ux.intel.com>,
	"Wu, Fengguang" <fengguang.wu@...el.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Borislav Petkov <bp@...en8.de>
Subject: Re: [PATCH 5/9] HWPoison: add memory_failure_queue()


* huang ying <huang.ying.caritas@...il.com> wrote:

> On Sun, May 22, 2011 at 6:00 PM, Ingo Molnar <mingo@...e.hu> wrote:
> >
> > * huang ying <huang.ying.caritas@...il.com> wrote:
> >
> >> On Fri, May 20, 2011 at 7:56 PM, Ingo Molnar <mingo@...e.hu> wrote:
> >> >
> >> > * Huang Ying <ying.huang@...el.com> wrote:
> >> >
> >> >> > So why are we not working towards integrating this into our event
> >> >> > reporting/handling framework, as i suggested from day one, when you
> >> >> > started posting these patches?
> >> >>
> >> >> The memory_failure_queue() introduced in this patch is general, that is, it
> >> >> can be used not only by ACPI/APEI but also by any other hardware error
> >> >> handler, including your event reporting/handling framework.
> >> >
> >> > Well, the bit you are steadfastly ignoring is what i have made clear well
> >> > before you started adding these facilities: THEY ALREADY EXIST to a large
> >> > degree :-)
> >> >
> >> > So you were and are duplicating code instead of using and extending existing
> >> > event processing facilities. It does not matter one little bit that the code
> >> > you added is partly 'generic'; it's still overlapping and duplicated.
> >>
> >> How would hardware error recovery be done in your perf framework?  IMHO, it
> >> could be something as follows:
> >>
> >> - The NMI handler runs for the hardware error; the error information is
> >> collected and put into a ring buffer, and an irq_work is triggered for
> >> further processing.
> >> - In the irq_work handler, memory_failure_queue() is called to do the real
> >> recovery work for the recoverable memory errors in the ring buffer.
> >>
> >> What's your idea about hardware error recovery in perf?
> >
> > For the first step, the whole irq_work and ring-buffer machinery already looks
> > largely duplicated: you can collect into a perf event ring-buffer from NMI
> > context just like the regular perf events do.
> 
> Why duplicated? perf uses the generic irq_work too.

Yes, of course, because - if you still remember - Peter split irq_work out of 
perf events:

 e360adbe2924: irq_work: Add generic hardirq context callbacks

 |
 | Perf currently has such a mechanism, so extract that and provide it as a 
 | generic feature, independent of perf so that others may also benefit.
 |

:-)
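
For context, a minimal sketch (not the actual APEI or perf code) of the flow
described in the quoted mail above: the NMI handler only records the failing
pfn and kicks an irq_work, and the irq_work callback, running in IRQ context
after the NMI has returned, drains the buffer and calls memory_failure_queue().
The three-argument (pfn, trapno, flags) signature is assumed from this patch
series, and overflow and NMI-vs-NMI races are glossed over:

#include <linux/atomic.h>
#include <linux/init.h>
#include <linux/irq_work.h>
#include <linux/mm.h>		/* memory_failure_queue() */

#define ERR_RING_SIZE	16

static unsigned long err_pfn_ring[ERR_RING_SIZE];
static atomic_t err_head, err_tail;
static struct irq_work hwerr_irq_work;

/* irq_work callback: runs in IRQ context, outside the NMI. */
static void hwerr_irq_work_cb(struct irq_work *work)
{
	while (atomic_read(&err_tail) != atomic_read(&err_head)) {
		unsigned long pfn =
			err_pfn_ring[atomic_read(&err_tail) % ERR_RING_SIZE];

		atomic_inc(&err_tail);
		memory_failure_queue(pfn, 0, 0);
	}
}

/* Called from the NMI handler once a recoverable memory error is decoded. */
static void hwerr_report_from_nmi(unsigned long pfn)
{
	/* NMI context: only touch the ring and queue the irq_work. */
	err_pfn_ring[atomic_read(&err_head) % ERR_RING_SIZE] = pfn;
	atomic_inc(&err_head);
	irq_work_queue(&hwerr_irq_work);
}

static int __init hwerr_init(void)
{
	init_irq_work(&hwerr_irq_work, hwerr_irq_work_cb);
	return 0;
}

A real implementation would need a lock-less structure instead of this toy
ring, but the NMI -> irq_work -> memory_failure_queue() hand-off is the part
under discussion here.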

But in hindsight the level of abstraction (for this use case) was set too low, 
because we lose wider access to the actual events themselves:

> > The generalization that *would* make sense is not at the irq_work level
> > really; instead we could generalize a 'struct event' for kernel-internal
> > producers and consumers of events that have no explicit PMU connection.
> >
> > This new 'struct event' would be slimmer and would only contain the fields 
> > and features that generic event consumers and producers need. Tracing 
> > events could be updated to use these kinds of slimmer events.
> >
> > It would still plug nicely into existing event ABIs, would work with event
> > filters, etc., so the tooling side would remain focused and unified.
> >
> > Something like that. It is rather clear by now that splitting out irq_work 
> > was a mistake. But mistakes can be fixed and some really nice code could 
> > come out of it! Would you be interested in looking into this?
> 
> Yes.  This can transfer hardware error data from the kernel to user space.
> Then, how would hardware error recovery be done in this big picture?  IMHO,
> we will need to call something like memory_failure_queue() in IRQ context
> for memory errors.

That's where 'active filters' come into the picture - see my other mail (that
was in the context of unidentified NMI errors/events) where i outlined how they
would work in this case and elsewhere. Via active filters we could share most
of the code, gain access to the events, and still have kernel-driven policy
action.
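
Purely as illustration, a sketch of how such an active filter might look on
the kernel side. Every name below (struct kernel_event, EV_MEM_ERROR, the
registration hook in the trailing comment) is hypothetical and does not exist
in the kernel; only memory_failure_queue() is from this patch series, again
with the (pfn, trapno, flags) signature assumed:

#include <linux/mm.h>		/* memory_failure_queue() */
#include <linux/types.h>

/* Hypothetical slimmed-down kernel-internal event, no PMU connection. */
#define EV_MEM_ERROR	1	/* hypothetical event type */

struct kernel_event {
	u32	type;
	u64	data;		/* for EV_MEM_ERROR: the failing pfn */
};

/*
 * Hypothetical active filter: runs in the kernel when the event is
 * published, may trigger policy action, and returns whether the event
 * should still be delivered to user space (ring buffer, event filters).
 */
static bool mem_error_active_filter(struct kernel_event *ev)
{
	if (ev->type != EV_MEM_ERROR)
		return true;

	/* Kernel-driven policy action: start recovery for this page. */
	memory_failure_queue((unsigned long)ev->data, 0, 0);

	return true;	/* keep the event visible to tooling as well */
}

/*
 * Hypothetical registration, e.g. from the APEI/GHES init path:
 *
 *	event_add_active_filter(EV_MEM_ERROR, mem_error_active_filter);
 */

The point is that the same event would both drive in-kernel recovery and
still flow to user space through the usual event ABI and filters.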

Thanks,

	Ingo
