Message-ID: <3ea28fe1-1828-1017-fa0f-da626d773440@intel.com>
Date: Mon, 28 Jan 2019 08:50:49 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: Balbir Singh <bsingharora@...il.com>,
Dave Hansen <dave.hansen@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org, thomas.lendacky@....com,
mhocko@...e.com, linux-nvdimm@...ts.01.org, tiwai@...e.de,
ying.huang@...el.com, linux-mm@...ck.org, jglisse@...hat.com,
bp@...e.de, baiyaowei@...s.chinamobile.com, zwisler@...nel.org,
bhelgaas@...gle.com, fengguang.wu@...el.com,
akpm@...ux-foundation.org
Subject: Re: [PATCH 0/5] [v4] Allow persistent memory to be used like normal
RAM
On 1/28/19 3:09 AM, Balbir Singh wrote:
>> This is intended for Intel-style NVDIMMs (aka. Intel Optane DC
>> persistent memory). These DIMMs are physically persistent,
>> more akin to flash than traditional RAM. They are also expected to
>> be more cost-effective than using RAM, which is why folks want this
>> set in the first place.
> What variant of NVDIMMs: F, P, or both?
I'd expect this to get used in any case where the NVDIMM is
cost-effective vs. DRAM. Today, I think that's only NVDIMM-P. At least
from what Wikipedia tells me about F vs. P vs. N:
https://en.wikipedia.org/wiki/NVDIMM
>> == Patch Set Overview ==
>>
>> This series adds a new "driver" to which pmem devices can be
>> attached. Once attached, the memory "owned" by the device is
>> hot-added to the kernel and managed like any other memory. On
>> systems with an HMAT (a new ACPI table), each socket (roughly)
>> will have a separate NUMA node for its persistent memory so
>> this newly-added memory can be selected by its unique NUMA
>> node.
>
> NUMA is a distance-based topology; does HMAT solve these problems?
NUMA is no longer just distance-based. Any memory with different
properties, like a memory-side cache or different bandwidth, can be
placed in its own, discrete NUMA node.
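To illustrate (just a sketch, not something from the series): once the
pmem shows up as its own node, an application can place data on it
explicitly with libnuma. The node number below is made up and depends
on the platform.

	/* illustration only: assume node 2 is the hot-added pmem node */
	#include <numa.h>
	#include <stdio.h>

	int main(void)
	{
		if (numa_available() < 0) {
			fprintf(stderr, "no NUMA support\n");
			return 1;
		}

		/* place a 1MB buffer on the (hypothetical) pmem node */
		void *buf = numa_alloc_onnode(1 << 20, 2);
		if (!buf)
			return 1;

		/* ... use it like any other memory ... */
		numa_free(buf, 1 << 20);
		return 0;
	}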
> How do we prevent pmem nodes from becoming fallback nodes for normal
> nodes?
NUMA policies.
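For instance (sketch only, the "0-1" DRAM node list is made up), a task
can bind its allocations to the DRAM nodes so the allocator never falls
back to the pmem node:

	/* sketch: restrict allocations to the (hypothetical) DRAM nodes */
	#include <numa.h>

	void bind_to_dram(void)
	{
		struct bitmask *dram = numa_parse_nodestring("0-1");

		if (dram) {
			numa_set_membind(dram);	/* MPOL_BIND under the hood */
			numa_free_nodemask(dram);
		}
	}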
> On an unexpected crash/failure, is there a scrubbing mechanism,
> or do we rely on the allocator to do the right thing prior to
> reallocating any memory?
Yes, but this is not unique to persistent memory. On a kexec-based
crash, there might be old, sensitive data in *RAM* when the kernel comes
up. We depend on the allocator to zero things there. We also just
plain depend on the allocator to zero things so we don't leak
information when recycling pages in the first place.
I can't think of a scenario where some kind of "leak" of old data
wouldn't also be a bug with normal, volatile RAM.
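For what it's worth, here's an illustrative userspace check of that
behavior: a fresh anonymous mapping comes back zero-filled no matter
what the page held before.

	/* sketch: new anonymous memory is zeroed before userspace sees it */
	#include <assert.h>
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		static const char zeroes[4096];
		char *p = mmap(NULL, sizeof(zeroes), PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		assert(p != MAP_FAILED);
		assert(!memcmp(p, zeroes, sizeof(zeroes)));
		munmap(p, sizeof(zeroes));
		return 0;
	}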
> Will frequent zeroing hurt NVDIMM/pmem lifetimes?
Every reputable vendor selling media with limited endurance quantifies
that endurance. I'd suggest that folks know the endurance of their
media before enabling this.