Message-ID: <20210401161237.GC28954@zn.tnic>
Date:   Thu, 1 Apr 2021 18:12:37 +0200
From:   Borislav Petkov <bp@...en8.de>
To:     William Roche <william.roche@...cle.com>
Cc:     linux-kernel@...r.kernel.org, Tony Luck <tony.luck@...el.com>,
        linux-edac@...r.kernel.org
Subject: Re: [PATCH v1] RAS/CEC: Memory Corrected Errors consistent event
 filtering

On Mon, Mar 29, 2021 at 11:44:05AM +0200, William Roche wrote:
> I totally agree with you, and in order to schedule a replacement, the
> MCE information (enriched by the notifier chain) is more meaningful
> than PFN values alone.

Well, if you want to collect errors and analyze patterns in order to
detect hw going bad, you're probably better off disabling the CEC
altogether - either disable it in Kconfig or boot with ras=cec_disable.
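
Concretely, that is (assuming the usual Kconfig symbol for the collector
in drivers/ras/Kconfig - double-check on your tree):

  # Build time: don't build the collector at all
  # CONFIG_RAS_CEC is not set

  # Or keep it built but switch it off on the kernel command line:
  ras=cec_disable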

> 1/ Give ras_cec back a consistent behavior where the first occurrence
> of a CE doesn't generate an MCE message from the MCE_HANDLED_CEC
> notifiers, and where slot 0 behaves consistently with the other PFN
> slots.

If by this you mean the issue with the return value, then sure.

If you mean something else, you'd have to be more specific.

> 2/ Report the CE's MCE information when the action threshold is
> reached, to help the administrator identify what generated the PFN
> "Soft-offlining" or "Invalid pfn" message.
> 
> When ras_cec is enabled it hides most of the CE errors, but once the
> action threshold is reached, all notifiers can report the error that
> appeared too often.
> 
> An administrator getting too many action-threshold CE errors can then
> schedule a replacement based on the information provided by the EDAC
> module etc...

Well, this probably works only in theory.

First of all, the CEC sees the error first, before the EDAC drivers.

But in order to map from the error address to the actual DIMM, you
need the EDAC drivers to have a go at the error. In many cases not even
the EDAC drivers can give you that mapping because, well, hw/fw does its
own stuff underneath, predictive fault bla, added value crap, whatever,
so that we can't even get a "DIMM X on processor Y caused the error."

I know, your assumption is that if a page gets offlined by the CEC, then
all the errors' addresses come from the same physical DIMM. And that is
probably correct in most cases, but I'm not convinced it holds for all.

In any case, what we could do - which is pretty easy and cheap - is to
fix the retval of cec_add_elem() to communicate to the caller that it
offlined a page, and thus tell the notifier chain that the error needs
to be printed into dmesg with a statement saying that DIMM Y just got
one more page offlined.
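
At the notifier end that could look roughly like below - a sketch only,
following the names in drivers/ras/cec.c, with the "> 0 means a page was
just offlined" return convention being purely illustrative, not what the
code does today:

static int cec_notifier(struct notifier_block *nb, unsigned long val,
                        void *data)
{
        struct mce *m = (struct mce *)data;
        int err;

        if (!m)
                return NOTIFY_DONE;

        /* Only correctable DRAM errors with a usable address are collected. */
        if (!mce_is_memory_error(m) || !mce_is_correctable(m) ||
            !mce_usable_address(m))
                return NOTIFY_DONE;

        err = cec_add_elem(m->addr >> PAGE_SHIFT);
        if (err > 0) {
                /*
                 * Illustrative convention: the CEC just offlined this page.
                 * Do NOT mark the error as handled so the remaining
                 * notifiers (EDAC etc.) still decode and log it to dmesg.
                 */
                return NOTIFY_DONE;
        } else if (!err) {
                /* Collected but below the threshold: stay quiet as today. */
                m->kflags |= MCE_HANDLED_CEC;
                return NOTIFY_OK;
        }

        /* cec_add_elem() failed: let the rest of the chain have a go. */
        return NOTIFY_DONE;
}

The "DIMM Y got one more page offlined" statement itself would then have
to come from whichever notifier can do the address-to-DIMM decoding, not
from the CEC.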

Over time, if a DIMM is going bad, one should be able to grep dmesg and
correlate all those offlined pages to DIMMs and then maybe see a pattern
and eventually schedule a downtime.

A lot of ifs, I know. :-\

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
