Message-ID: <20250220121915.00001391@huawei.com>
Date: Thu, 20 Feb 2025 12:19:15 +0000
From: Jonathan Cameron <Jonathan.Cameron@...wei.com>
To: Borislav Petkov <bp@...en8.de>
CC: Shiju Jose <shiju.jose@...wei.com>, "linux-edac@...r.kernel.org"
	<linux-edac@...r.kernel.org>, "linux-cxl@...r.kernel.org"
	<linux-cxl@...r.kernel.org>, "linux-acpi@...r.kernel.org"
	<linux-acpi@...r.kernel.org>, "linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"tony.luck@...el.com" <tony.luck@...el.com>, "rafael@...nel.org"
	<rafael@...nel.org>, "lenb@...nel.org" <lenb@...nel.org>,
	"mchehab@...nel.org" <mchehab@...nel.org>, "dan.j.williams@...el.com"
	<dan.j.williams@...el.com>, "dave@...olabs.net" <dave@...olabs.net>,
	"dave.jiang@...el.com" <dave.jiang@...el.com>, "alison.schofield@...el.com"
	<alison.schofield@...el.com>, "vishal.l.verma@...el.com"
	<vishal.l.verma@...el.com>, "ira.weiny@...el.com" <ira.weiny@...el.com>,
	"david@...hat.com" <david@...hat.com>, "Vilas.Sridharan@....com"
	<Vilas.Sridharan@....com>, "leo.duran@....com" <leo.duran@....com>,
	"Yazen.Ghannam@....com" <Yazen.Ghannam@....com>, "rientjes@...gle.com"
	<rientjes@...gle.com>, "jiaqiyan@...gle.com" <jiaqiyan@...gle.com>,
	"Jon.Grimm@....com" <Jon.Grimm@....com>, "dave.hansen@...ux.intel.com"
	<dave.hansen@...ux.intel.com>, "naoya.horiguchi@....com"
	<naoya.horiguchi@....com>, "james.morse@....com" <james.morse@....com>,
	"jthoughton@...gle.com" <jthoughton@...gle.com>, "somasundaram.a@....com"
	<somasundaram.a@....com>, "erdemaktas@...gle.com" <erdemaktas@...gle.com>,
	"pgonda@...gle.com" <pgonda@...gle.com>, "duenwen@...gle.com"
	<duenwen@...gle.com>, "gthelen@...gle.com" <gthelen@...gle.com>,
	"wschwartz@...erecomputing.com" <wschwartz@...erecomputing.com>,
	"dferguson@...erecomputing.com" <dferguson@...erecomputing.com>,
	"wbs@...amperecomputing.com" <wbs@...amperecomputing.com>,
	"nifan.cxl@...il.com" <nifan.cxl@...il.com>, tanxiaofei
	<tanxiaofei@...wei.com>, "Zengtao (B)" <prime.zeng@...ilicon.com>, "Roberto
 Sassu" <roberto.sassu@...wei.com>, "kangkang.shen@...urewei.com"
	<kangkang.shen@...urewei.com>, wanghuiqiang <wanghuiqiang@...wei.com>,
	Linuxarm <linuxarm@...wei.com>, Vandana Salve <vsalve@...ron.com>, "Steven
 Rostedt" <rostedt@...dmis.org>
Subject: Re: [PATCH v18 04/19] EDAC: Add memory repair control feature

On Wed, 19 Feb 2025 19:45:33 +0100
Borislav Petkov <bp@...en8.de> wrote:

> On Tue, Feb 18, 2025 at 04:51:25PM +0000, Jonathan Cameron wrote:
> > As a side note, if you are in the situation where the device can do
> > memory repair without any disruption of memory access, then my
> > assumption is that, in the case where the device would set
> > maintenance needed and is considering soft repair (so no long-term
> > cost to a wrong decision), the device would probably just do it
> > autonomously and at most we might get a notification.  
> 
> And this is basically what I'm trying to hint at: if you can do the recovery
> action without userspace involvement, then please, by all means. There's no
> need to noodle information back'n'forth through userspace if the kernel, or
> even the device itself, can handle it on its own.
> 
> More involved stuff should obviously rely on userspace to do more involved
> "pondering."

Let's explore this further as a follow-up. A policy switch to let the kernel
do the 'easy' stuff (assuming the device didn't do it) makes sense if this
particular combination is common.
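To be concrete about what I'd consider the 'easy' combination, a rough sketch of the decision (entirely hypothetical; nothing like this is in the posted series, and the field names are invented):

```python
# Hypothetical policy sketch: the kernel auto-repairs only when the
# device flagged "maintenance needed", the repair is soft (consumes no
# persistent resource, so a wrong decision has no long-term cost) and
# the repair is non-disruptive. Everything else goes to userspace.
from dataclasses import dataclass

@dataclass
class RepairEvent:
    maintenance_needed: bool   # device flagged "maintenance needed"
    soft_repair: bool          # repair consumes no persistent resource
    disruptive: bool           # repair would stall memory accesses

def kernel_should_auto_repair(ev: RepairEvent, policy_enabled: bool) -> bool:
    """Auto-repair only the combination where a wrong decision is cheap."""
    return (policy_enabled and ev.maintenance_needed
            and ev.soft_repair and not ev.disruptive)
```

Anything falling outside that predicate would still be reported and left for userspace to ponder.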

> 
> > So I think that if we see this there will be some disruption.
> > Latency spikes for soft repair or we are looking at hard repair.
> > In that case we'd need policy on whether to repair at all.
> > In general the rasdaemon handling in that series is intentionally
> > simplistic. Real solutions will take time to refine but they
> > don't need changes to the kernel interface, just when to poke it.  
> 
> I hope so.
> 
> > The error record comes out as a trace point. Is there any precedent for
> > injecting those back into the kernel?   
> 
> I'm just questioning the whole interface and its usability. Not saying it
> doesn't make sense - we're simply weighing all options here.
> 
> > That policy question is a long term one but I can suggest 'possible' policies
> > that might help motivate the discussion
> >
> > 1. Repair may be very disruptive to memory latency. Delay until a maintenance
> >    window when latency spike is accepted by the customer until then rely on
> >    maintenance needed still representing a relatively low chance of failure.  
> 
> So during the maintenance window, the operator is supposed to do
> 
> rasdaemon --start-expensive-repair-operations

Yes, it would be something along those lines.  Or a script very similar to
the boot one Shiju wrote: scan the DB, find what needs repairing, and do so.
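Roughly like this (the sysfs attribute names follow the series; the sqlite table and column names here are invented for illustration, not real rasdaemon schema):

```python
# Maintenance-window repair sketch: pull records previously logged as
# needing repair out of a DB, program the target into the EDAC
# mem_repair sysfs attributes, then poke 'repair' to kick it off.
import sqlite3
from pathlib import Path

EDAC_DEV = Path("/sys/bus/edac/devices/cxl_mem0")
DB = Path("/var/lib/rasdaemon/ras-mc_event.db")

def repair_one(dev: Path, bank: int, row: int) -> None:
    """Program one target, then trigger the repair operation."""
    mr = dev / "mem_repair0"
    (mr / "bank").write_text(str(bank))
    (mr / "row").write_text(str(row))
    (mr / "repair").write_text("1")

def run_maintenance_window(dev: Path = EDAC_DEV, db: Path = DB) -> int:
    """Repair everything flagged in the DB; return how many were done."""
    if not db.exists():
        return 0
    with sqlite3.connect(db) as conn:
        pending = conn.execute(
            'SELECT bank, "row" FROM mem_repair_pending').fetchall()
    for bank, row in pending:
        repair_one(dev, bank, row)
    return len(pending)
```

The operator (or a timer) runs it during the agreed window, when the latency spikes from the repair operations are acceptable.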

> 
> ?
> 
> > 2. Hard repair uses known limited resources - e.g. those are known to match up
> >    to a particular number of rows in each module. That is not discoverable under
> >    the CXL spec so would have to come from another source of metadata.
> >    Apply some sort of fall-off function so that we repair only the very worst
> >    cases as we run out. The alternative is to always soft-offline the memory in
> >    the OS; the aim is to reduce the chance of having to do that, in a somewhat
> >    optimal fashion.  I'm not sure on the appropriate stats; maybe assume a
> >    given granule failure rate follows a Poisson distribution and attempt to
> >    estimate lambda?  Would need an expert in appropriate failure modes or a
> >    lot of data to define this!  
> 
> I have no clue what you're saying here. :-)

I'll write something up at some point as it's definitely a complex
topic and I need to find a statistician + hardware folk with error models to
help flesh it out. 
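To give a flavour of the sort of thing I mean (purely illustrative; the threshold and the counts are made up, and a real model would need those failure-mode experts):

```python
# Illustrative only: treat per-granule error counts over an interval as
# Poisson-distributed. The maximum-likelihood estimate of the rate
# lambda is just the sample mean; a scarce hard-repair resource is then
# spent only on granules likely to throw further errors.
from math import exp, factorial

def estimate_lambda(counts):
    """MLE of the Poisson rate: mean errors per granule per interval."""
    return sum(counts) / len(counts)

def prob_at_least(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

def worth_hard_repair(counts, threshold=0.5):
    """Repair only if the chance of another error beats the threshold."""
    return prob_at_least(1, estimate_lambda(counts)) > threshold
```

The fall-off function would then be some policy tightening `threshold` as the pool of spare rows shrinks.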

There is another topic to look at: what to do with synchronous poison
if we can repair the memory and bring it back into use.
I can't find the thread, but last time I asked about recovering from that, the
mm folk said they'd need to see the code + use cases (fair enough!).

> 
> > It is the simplest interface that we have come up with so far. I'm fully open
> > to alternatives that provide a clean way to get this data back into the
> > kernel and play well with existing logging tooling (e.g. rasdaemon)
> > 
> > Some things we could do,
> > * Store binary of trace event and reinject. As above + we would have to be
> >   very careful that any changes to the event are made with knowledge that
> >   we need to handle this path.  Little or now marshaling / formatting code
> >   in userspace, but new logging infrastructure needed + a chardev /ioctl
> >   to inject the data and a bit of userspace glue to talk to it.
> > * Reinject a binary representation we define, via an ioctl on some
> >   chardev we create for the purpose.  Userspace code has to take
> >   key-value pairs and process them into this form.  So a similar amount
> >   of marshaling code to what we have for sysfs.
> > * Or what we currently propose: write a set of key-value pairs to a simple
> >   (though multi-file) sysfs interface. As you've noted, marshaling is needed.  
> 
> ... and the advantage of having such a sysfs interface: it is human readable
> and usable vs having to use a tool to create a binary blob in a certain
> format...
> 
> Ok, then. Let's give that API a try... I guess I need to pick up the EDAC
> patches from here:
> 
> https://lore.kernel.org/r/20250212143654.1893-1-shiju.jose@huawei.com
> 
> If so, there's an EDAC patch 14 which is not together with the first 4. And
> I was thinking of taking the first 4 or 5 and then giving other folks an
> immutable branch in the EDAC tree which they can use to base the CXL stuff on
> top.
> 
> What's up?

My fault. I asked Shiju to split out the more complex ABI for sparing,
so that the complexity builds up gradually rather than all being in one patch.

It should be fine for you to take 1-4 and 14, which is all the EDAC parts.

For 5 and 6: Rafael acked the ACPI part (5), and the ACPI RAS2 scrub driver
has no other dependencies, so I think that should go through your
tree as well, though it has no need to be in the immutable branch.

Dave Jiang can work his magic on the CXL stuff on top of a merge of your
immutable branch.

Thanks!

Jonathan
> 

