Message-ID: <20250109183448.000059ec@huawei.com>
Date: Thu, 9 Jan 2025 18:34:48 +0000
From: Jonathan Cameron <Jonathan.Cameron@...wei.com>
To: Borislav Petkov <bp@...en8.de>
CC: Shiju Jose <shiju.jose@...wei.com>, "linux-edac@...r.kernel.org"
<linux-edac@...r.kernel.org>, "linux-cxl@...r.kernel.org"
<linux-cxl@...r.kernel.org>, "linux-acpi@...r.kernel.org"
<linux-acpi@...r.kernel.org>, "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"tony.luck@...el.com" <tony.luck@...el.com>, "rafael@...nel.org"
<rafael@...nel.org>, "lenb@...nel.org" <lenb@...nel.org>,
"mchehab@...nel.org" <mchehab@...nel.org>, "dan.j.williams@...el.com"
<dan.j.williams@...el.com>, "dave@...olabs.net" <dave@...olabs.net>,
"dave.jiang@...el.com" <dave.jiang@...el.com>, "alison.schofield@...el.com"
<alison.schofield@...el.com>, "vishal.l.verma@...el.com"
<vishal.l.verma@...el.com>, "ira.weiny@...el.com" <ira.weiny@...el.com>,
"david@...hat.com" <david@...hat.com>, "Vilas.Sridharan@....com"
<Vilas.Sridharan@....com>, "leo.duran@....com" <leo.duran@....com>,
"Yazen.Ghannam@....com" <Yazen.Ghannam@....com>, "rientjes@...gle.com"
<rientjes@...gle.com>, "jiaqiyan@...gle.com" <jiaqiyan@...gle.com>,
"Jon.Grimm@....com" <Jon.Grimm@....com>, "dave.hansen@...ux.intel.com"
<dave.hansen@...ux.intel.com>, "naoya.horiguchi@....com"
<naoya.horiguchi@....com>, "james.morse@....com" <james.morse@....com>,
"jthoughton@...gle.com" <jthoughton@...gle.com>, "somasundaram.a@....com"
<somasundaram.a@....com>, "erdemaktas@...gle.com" <erdemaktas@...gle.com>,
"pgonda@...gle.com" <pgonda@...gle.com>, "duenwen@...gle.com"
<duenwen@...gle.com>, "gthelen@...gle.com" <gthelen@...gle.com>,
"wschwartz@...erecomputing.com" <wschwartz@...erecomputing.com>,
"dferguson@...erecomputing.com" <dferguson@...erecomputing.com>,
"wbs@...amperecomputing.com" <wbs@...amperecomputing.com>,
"nifan.cxl@...il.com" <nifan.cxl@...il.com>, tanxiaofei
<tanxiaofei@...wei.com>, "Zengtao (B)" <prime.zeng@...ilicon.com>, "Roberto
Sassu" <roberto.sassu@...wei.com>, "kangkang.shen@...urewei.com"
<kangkang.shen@...urewei.com>, wanghuiqiang <wanghuiqiang@...wei.com>,
Linuxarm <linuxarm@...wei.com>
Subject: Re: [PATCH v18 04/19] EDAC: Add memory repair control feature
On Thu, 9 Jan 2025 17:19:02 +0100
Borislav Petkov <bp@...en8.de> wrote:
> On Thu, Jan 09, 2025 at 04:01:59PM +0000, Jonathan Cameron wrote:
> > Ok. To me the fact it's not a single write was relevant. Seems not
> > in your mental model of how this works. For me a single write
> > that you cannot query back is fine, setting lots of parameters and
> > being unable to query any of them less so. I guess you disagree.
>
> Why can't you query it back?
>
> grep -r . /sysfs/dir/
>
> All files' values have been previously set and should still be there on
> a read, I'd strongly hope. Your ->read routines should give the values back.
Today you can. It seems we are talking at cross purposes.
I'm confused. I thought your proposal was for the "bank" attribute to
present an allowed range on read.
The "bank" attribute is currently written to and read back as the value
of the bank on which to conduct a repair. Maybe this disconnect is down
to the fact that the max_ and min_ attributes should have been marked as
RO in the docs. They aren't controls, just a presentation of limits to
userspace.
Was the intent a separate bank_range type attribute rather than
max_bank / min_bank? One of those would be absolutely fine (similar to
the _available attributes in IIO - I added those years ago to meet a
similar need and we've never had any issues with them).
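For reference, a minimal kernel-side sketch of the sort of read-only
range attribute I have in mind, along the lines of IIO's _available
files. The attribute name, context structure and limits are purely
illustrative, not a proposed ABI:

#include <linux/device.h>
#include <linux/sysfs.h>

/* Hypothetical per-device data holding the discovered limits. */
struct mem_repair_ctx {
	u32 min_bank;
	u32 max_bank;
};

static ssize_t bank_range_show(struct device *dev,
			       struct device_attribute *attr, char *buf)
{
	struct mem_repair_ctx *ctx = dev_get_drvdata(dev);

	/*
	 * Read-only presentation of the limits; "bank" stays a separate
	 * writable control that reads back whatever was last written.
	 */
	return sysfs_emit(buf, "[%u %u]\n", ctx->min_bank, ctx->max_bank);
}
static DEVICE_ATTR_RO(bank_range);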
>
> > In interests of progress I'm not going to argue further. No one is
> > going to use this interface by hand anyway so the lost of useability
> > I'm seeing doesn't matter a lot.
>
> I had the suspicion that this user interface is not really going to be used by
> a user but by a tool. But then if you don't have a tool, you're lost.
>
> This is one of the reasons why you can control ftrace directly on the shell
> too - without a tool. This is very useful in certain cases where you cannot
> run some userspace tools.
I fully agree. What I said was in response to thinking you wanted it to
be impossible to read back the user-set values (overlapping uses of a
single "bank" attribute, which wasn't what you meant). Read back is
useful for a user wanting to do the cat /sys/... that you mention above,
but not vital if they are directly reading the tracepoints for the error
records and poking the sysfs interface.
Given it seems I misunderstood that suggestion, ignore my reply to it as
irrelevant.
>
> > In at least the CXL case I'm fairly sure most of them are not discoverable.
> > Until you see errors you have no idea what the memory topology is.
>
> Ok.
>
> > For that you'd need to have a path to read back what happened.
>
> So how is this scrubbing going to work? You get an error, you parse it for all
> the attributes and you go and write those attributes into the scrub interface
> and it starts scrubbing?
Repair, not scrubbing. They are different things we should keep
separate: scrub corrects the value, if it can, but doesn't switch the
underlying memory to new memory cells to avoid repeated errors.
Replacing scrub with repair (which I think was the intent here)...
You get error records that describe the error seen in hardware, write
the values back into this interface and tell it to repair the memory.
This is not necessarily a synchronous or immediate thing - it is instead
typically based on trend analysis.
As an example, the decision might be that a bit of RAM threw up 3 errors
over a month, including across multiple system reboots (for other
reasons), and that is over some threshold, so we use a spare memory line
to replace it.
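To make that concrete, a rough userspace sketch of the "write the error
record values back and trigger a repair" step. The sysfs path and
attribute names here are illustrative only:

#include <errno.h>
#include <limits.h>
#include <stdio.h>

/* Write a single value to one attribute under the repair directory. */
static int write_attr(const char *dir, const char *name, unsigned long val)
{
	char path[PATH_MAX];
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", dir, name);
	f = fopen(path, "w");
	if (!f)
		return -errno;
	fprintf(f, "%lu\n", val);

	return fclose(f) ? -errno : 0;
}

int main(void)
{
	/* Illustrative device path, not the final ABI. */
	const char *dir = "/sys/bus/edac/devices/cxl_mem0/mem_repair0";

	/* Values copied from the error record trace event. */
	write_attr(dir, "bank", 3);
	write_attr(dir, "row", 0x1a2b);
	write_attr(dir, "column", 7);

	/* Kick off the repair once the location is fully described. */
	return write_attr(dir, "repair", 1) ? 1 : 0;
}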
>
> But then why do you even need the interface at all?
>
> Why can't the kernel automatically collect all those attributes and start the
> scrubbing automatically - no need for any user interaction...?
>
> So why do you *actually* even need user interaction here and why can't the
> kernel be smart enough to start the scrub automatically?
Short answer: it needs to be very smart and there isn't a one size fits
all solution - hence the suggested approach of making it a userspace
problem.
There are hardware autonomous solutions and ones handled by host
firmware. That is how repair is done in many servers - at most, software
sees a slight latency spike as the memory is repaired under the hood.
Some CXL devices will do this as well. Those CXL devices may provide an
additional repair interface for the less clear cut decisions that need
more data processing / analysis than the device firmware is doing.
Other CXL devices will take the view that the OS is best placed to make
all the decisions - those will sometimes give a 'maintenance needed'
indication in the error records, but that is still a hint the host may
or may not take any notice of.
Given that, in the systems being considered here, software is triggering
the repair, we want to allow for policy in the decision. In simple cases
we could push that policy into the kernel, e.g. just repair the moment
we see an error record. These repair resources are very limited in
number though, so immediately repairing may be a bad idea. We want to
build up a history of errors before making such a decision. That can be
done in kernel.
The decision to repair memory is heavily influenced by policy and time
considerations weighed against device resource constraints.
Some options that are hard to do in kernel:
1. Typical asynchronous error report for a corrected error.
Tells us memory had an error (perhaps from a scrubbing engine on the
device running checks). No need to take action immediately. Instead,
build up more data over time and, if lots of errors occur, make the
decision to repair as now we are sure it is worth doing rather than
being a single random event. We may tune scrubbing engines to check this
memory more frequently and adjust our data analysis to take that into
account when setting thresholds etc.
When an admin considers it a good time to take action, offline the
memory and repair it before bringing it back into use (sometimes by
rebooting the machine). Sometimes repair can be triggered in a software
transparent way, sometimes not.
This also applies to uncorrectable errors, though in that case you can't
necessarily repair without ever seeing a synchronous poison with all the
impacts that has.
2. Soft repair across boots. We are actually storing the error records,
then only applying the fix on reboot before using the memory - so
maintaining a list of bad memory, saving it to a file and reading it
back on boot (a rough sketch of that replay step follows this list). We
could provide another kernel interface to get this info and reinject it
after reboot instead of doing it in userspace, but that is another ABI
to design.
3. Complex policy across fleets. A lot of work is going on around
prediction techniques that may change the local policy on each node
depending on the overall reliability patterns of a particular batch of
devices, local characteristics, service guarantees etc. If it is a hard
repair, then once you've run out of spares you need to schedule an
engineer to come out and replace the DIMM. All complex inputs to the
decision.
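To illustrate option 2, a rough sketch of the boot-time replay in
userspace. The record file format, path and attribute names are made up
for illustration, and the helper mirrors the earlier sketch:

#include <errno.h>
#include <limits.h>
#include <stdio.h>

/* Write a single value to one attribute, as in the earlier sketch. */
static int write_attr(const char *dir, const char *name, unsigned long val)
{
	char path[PATH_MAX];
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", dir, name);
	f = fopen(path, "w");
	if (!f)
		return -errno;
	fprintf(f, "%lu\n", val);

	return fclose(f) ? -errno : 0;
}

int main(void)
{
	/* Illustrative locations, not a proposed format or ABI. */
	const char *dir = "/sys/bus/edac/devices/cxl_mem0/mem_repair0";
	FILE *f = fopen("/var/lib/ras/bad-memory.list", "r");
	unsigned long bank, row, column;

	if (!f)
		return 0;	/* Nothing recorded on previous boots. */

	/* One "bank row column" triple per line, saved as errors were seen. */
	while (fscanf(f, "%lu %lu %lu", &bank, &row, &column) == 3) {
		write_attr(dir, "bank", bank);
		write_attr(dir, "row", row);
		write_attr(dir, "column", column);
		write_attr(dir, "repair", 1);
	}
	fclose(f);

	return 0;
}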
Similar cases, like CPU offlining on repeated errors, are handled in
userspace (e.g. RAS Daemon) for similar reasons: long term data
gathering and potentially complex algorithms.
>
> > Ok. Then can we just drop the range discoverability entirely or we go with
> > your suggestion and do not support read back of what has been
> > requested but instead have the reads return a range if known or "" /
> > return -EONOTSUPP if simply not known?
>
> Probably.
Too many options in the above paragraph, so just to check... "Probably"
to which one?
If it's a separate attribute from the one we write the control to, then
we do what is already done here and don't present that attribute at all
if the range isn't discoverable.
>
> > I can live with that though to me we are heading in the direction of
> > a less intuitive interface to save a small number of additional files.
>
> This is not the point. I already alluded to this earlier - we're talking about
> a user visible interface which, once it goes out, it is cast in stone forever.
>
> So those files better have a good reason to exist...
>
> And if we're not sure yet, we can upstream only those which are fine now and
> then continue discussing the rest.
Ok. The best path is to drop the available range support then (so no
min_/max_ or anything to replace them for now).
Added bonus: we don't have to rush this conversation and can make sure
we come to the right solution, driven by use cases.
Jonathan
> HTH.
>