Message-ID: <20100524162124.GB7145@sgi.com>
Date: Mon, 24 May 2010 11:21:24 -0500
From: Russ Anderson <rja@....com>
To: Andi Kleen <andi@...stfloor.org>
Cc: "Eric W. Biederman" <ebiederm@...ssion.com>,
Borislav Petkov <bp@...64.org>,
"Luck, Tony" <tony.luck@...el.com>,
Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>,
Mauro Carvalho Chehab <mchehab@...hat.com>,
"Young, Brent" <brent.young@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Matt Domsch <Matt_Domsch@...l.com>,
Doug Thompson <dougthompson@...ssion.com>,
Joe Perches <joe@...ches.com>, Ingo Molnar <mingo@...e.hu>,
"bluesmoke-devel@...ts.sourceforge.net"
<bluesmoke-devel@...ts.sourceforge.net>,
Linux Edac Mailing List <linux-edac@...r.kernel.org>,
rja@....com
Subject: Re: Hardware Error Kernel Mini-Summit
On Wed, May 19, 2010 at 11:03:24AM +0200, Andi Kleen wrote:
> Hi Eric,
>
> > I'm not ready to believe the average person that is running linux
> > is too stupid to understand the difference between a hardware
> > error and a software error.
>
> Experience disagrees with you (maybe not the average user, but at
> least a significant portion).
>
> And again, today there are other reasons for it.
I agree with Andi. While there is a wide range of users, the
vast majority know little about the hardware they are running
on. Even in commercial settings, where users/admins are better
educated, there is little time for detailed error analysis.
The more errors that are detected/analyzed/corrected/recovered,
the better it is for everyone.
> > > Really to do anything useful with them you need trends
> > > and automatic actions (like predictive page offlining)
> >
> > Not at all, and I don't have a clue where you start thinking
> > predictive page offlining makes the least bit of sense. Broken
> > or even weak bits are rarely the common reason for ECC errors.
>
> There are various studies that disagree with you on that.
Having the infrastructure to automatically off-line pages
is a good thing. The details of where to set the predictive
threshold will likely be hardware specific (different DIMM
types fail at different rates), so it needs to be adjustable.
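
To make the threshold idea concrete, here is a minimal user-space
sketch (mine, not anything shipping): count corrected errors per page
and soft-offline a page once it crosses an adjustable limit, assuming
the kernel's /sys/devices/system/memory/soft_offline_page interface
(which takes the physical address to offline). The threshold value
and the per-page bookkeeping are placeholders.

/*
 * Hypothetical sketch: threshold-based page offlining from user space.
 * Assumes /sys/devices/system/memory/soft_offline_page, which takes
 * the physical address of the page to soft-offline.
 */
#include <stdio.h>
#include <stdint.h>

#define OFFLINE_THRESHOLD 10  /* corrected errors per page; tune per DIMM type */

static int soft_offline(uint64_t phys_addr)
{
	FILE *f = fopen("/sys/devices/system/memory/soft_offline_page", "w");
	if (!f)
		return -1;
	fprintf(f, "0x%llx", (unsigned long long)phys_addr);
	return fclose(f);
}

/* Called for each corrected-error event with its physical address. */
void note_corrected_error(uint64_t phys_addr)
{
	static uint64_t last_page;  /* toy bookkeeping: real code needs a table */
	static unsigned count;

	uint64_t page = phys_addr & ~0xfffULL;  /* assume 4K pages */
	if (page == last_page) {
		count++;
	} else {
		last_page = page;
		count = 1;
	}
	if (count >= OFFLINE_THRESHOLD && soft_offline(phys_addr) == 0)
		count = 0;  /* page is gone; stop counting it */
}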
> > > A log isn't really a good format for that
> >
> > A log is a fine format for realizing you have a problem. A
>
> A low, steady rate of corrected errors on a large system
> is expected. In fact, if you look at the memory error log
> of a large system (toward TBs of memory), it nearly always
> has some memory-related events.
Yes, there are certainly examples of that.
> In this case a log is not really useful. What you need
> are useful thresholds and a good summary.
The larger the system, the more important a good summary is.
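
As a sketch of what such a summary could look like (illustrative only;
the DIMM indexing and the reporting interval are my assumptions), fold
the stream of corrected-error records into per-DIMM counts and emit
one line per interval instead of one line per event:

/* Hypothetical summary: one line per DIMM per interval, not per event. */
#include <stdio.h>

#define MAX_DIMMS 64

static unsigned long ce_count[MAX_DIMMS];  /* corrected errors this interval */

void record_ce(int dimm)
{
	if (dimm >= 0 && dimm < MAX_DIMMS)
		ce_count[dimm]++;
}

/* Run once per reporting interval (e.g. hourly from a timer). */
void emit_summary(void)
{
	for (int i = 0; i < MAX_DIMMS; i++) {
		if (ce_count[i]) {
			printf("DIMM %d: %lu corrected errors this interval\n",
			       i, ce_count[i]);
			ce_count[i] = 0;
		}
	}
}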
> > - Errors that occur frequently. That is broken hardware of one kind or
> > another. I want to know about that so I can schedule down time to replace
> > my memory before I get an uncorrected ECC error. Errors of this kind
> > are likely happening frequently enough to impact performance.
>
> Same issue here: if something is truly broken it floods
> you with errors.
>
> First, this costs a lot of time to process, and it does not
> actually tell you anything useful because most errors in a flood
> are similar.
>
> Basically you don't care if you have 100 or 1000 errors,
> and you definitely don't want all of the errors filling up
> your disk and using up your CPU.
>
> Again a threshold with an action is much more useful here.
Yes, good points.
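
A leaky bucket is one way to implement such a threshold-with-action:
the error count decays over time, so a slow background rate never
fires, while a burst from broken hardware does. A minimal sketch (the
capacity and leak rate are made-up numbers to tune per platform):

/* Hypothetical leaky-bucket threshold: decay old errors, act on bursts. */
#include <time.h>

#define BUCKET_CAPACITY 100  /* errors that trigger the action */
#define LEAK_PER_SEC    1    /* tolerated background rate */

struct leaky_bucket {
	unsigned long level;
	time_t last;
};

/* Returns 1 when the threshold action (offline, alert) should fire. */
int bucket_add(struct leaky_bucket *b, unsigned long events)
{
	time_t now = time(NULL);
	unsigned long leaked = (unsigned long)(now - b->last) * LEAK_PER_SEC;

	b->level = leaked >= b->level ? 0 : b->level - leaked;
	b->last = now;
	b->level += events;

	if (b->level >= BUCKET_CAPACITY) {
		b->level = 0;  /* reset after firing so we don't re-alert each event */
		return 1;
	}
	return 0;
}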
--
Russ Anderson, OS RAS/Partitioning Project Lead
SGI - Silicon Graphics Inc rja@....com