Message-ID: <dc7d8190-2c94-9bdb-fb5b-a80a3fb55822@oracle.com>
Date: Fri, 25 Jan 2019 11:10:22 -0800
From: Jane Chu <jane.chu@...cle.com>
To: "Verma, Vishal L" <vishal.l.verma@...el.com>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"Du, Fan" <fan.du@...el.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"bp@...e.de" <bp@...e.de>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"tiwai@...e.de" <tiwai@...e.de>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
"jglisse@...hat.com" <jglisse@...hat.com>,
"zwisler@...nel.org" <zwisler@...nel.org>,
"mhocko@...e.com" <mhocko@...e.com>,
"baiyaowei@...s.chinamobile.com" <baiyaowei@...s.chinamobile.com>,
"thomas.lendacky@....com" <thomas.lendacky@....com>,
"Wu, Fengguang" <fengguang.wu@...el.com>,
"Huang, Ying" <ying.huang@...el.com>,
"bhelgaas@...gle.com" <bhelgaas@...gle.com>
Subject: Re: [PATCH 5/5] dax: "Hotplug" persistent memory for use like normal
RAM
On 1/25/2019 10:20 AM, Verma, Vishal L wrote:
>
> On Fri, 2019-01-25 at 09:18 -0800, Dan Williams wrote:
>> On Fri, Jan 25, 2019 at 12:20 AM Du, Fan <fan.du@...el.com> wrote:
>>> Dan
>>>
>>> Thanks for the insights!
>>>
>>> Can I say that the UCE is delivered from hardware to the OS in a
>>> single way in the case of a machine check, and that only PMEM/DAX
>>> filters out the UC address and manages it in its own way via
>>> badblocks? If PMEM/DAX doesn't do so, then the common RAS workflow
>>> kicks in, right?
>>
>> The common RAS workflow always kicks in; it's just that the page state
>> presented by a DAX mapping needs distinct handling. Once it is
>> hot-plugged it no longer needs to be treated differently from "System
>> RAM".
>>
>>> And what about the case where ARS is involved but no machine check
>>> has fired, with respect to the function of this patchset?
>>
>> The hotplug effectively disconnects this address range from the ARS
>> results. They will still be reported in the libnvdimm "region" level
>> badblocks instance, but there's no safe / coordinated way to go clear
>> those errors without additional kernel enabling. There is no "clear
>> error" semantic for "System RAM".
>>
> Perhaps as future enabling, the kernel could perform "clear error" for
> offlined pages and make them usable again. But I'm not sure how
> prepared mm is to re-accept pages it previously offlined.
>
Offlining a DRAM-backed page due to a UC makes sense because
a. the physical DRAM cell might still have an error, and
b. a power cycle or scrubbing could potentially 'repair' the DRAM cell,
making the page usable again.
But for a PMEM-backed page, neither is true. If a poison bit is set in
a page, that indicates the underlying hardware has completed its repair
work; all that's left is for software to recover. Secondly, because
the poison is persistent, unless software explicitly clears the bit,
the page is permanently unusable.
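For reference, the badblocks instances mentioned above are exposed in the
standard sysfs badblocks format (one "<first_bad_sector> <count>" pair per
line, in 512-byte sectors). A minimal sketch of translating that format
into byte ranges, assuming example device paths and made-up sample data:

```python
def parse_badblocks(text, sector_size=512):
    """Parse sysfs badblocks content (e.g. from
    /sys/block/pmem0/badblocks, a hypothetical path) into a list of
    (byte_offset, byte_length) tuples."""
    ranges = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        # Each line is "<first_bad_sector> <num_bad_sectors>".
        sector, count = map(int, line.split())
        ranges.append((sector * sector_size, count * sector_size))
    return ranges

# Made-up sample content, in the same shape sysfs would report:
sample = "8 1\n1024 2\n"
print(parse_badblocks(sample))  # [(4096, 512), (524288, 1024)]
```

This only reads the error list; actually clearing poison on pmem requires
the driver/firmware path discussed above, not just rewriting the sysfs
entry.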
thanks,
-jane