Message-Id: <20190116181859.D1504459@viggo.jf.intel.com>
Date:   Wed, 16 Jan 2019 10:18:59 -0800
From:   Dave Hansen <dave.hansen@...ux.intel.com>
To:     dave@...1.net
Cc:     Dave Hansen <dave.hansen@...ux.intel.com>,
        dan.j.williams@...el.com, dave.jiang@...el.com, zwisler@...nel.org,
        vishal.l.verma@...el.com, thomas.lendacky@....com,
        akpm@...ux-foundation.org, mhocko@...e.com,
        linux-nvdimm@...ts.01.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, ying.huang@...el.com, fengguang.wu@...el.com,
        bp@...e.de, bhelgaas@...gle.com, baiyaowei@...s.chinamobile.com,
        tiwai@...e.de
Subject: [PATCH 0/4] Allow persistent memory to be used like normal RAM

I would like to get this series queued up for merging.  Since most of the
churn is in the nvdimm code, and it also depends on some refactoring
that only exists in the nvdimm tree, it seems like putting it in *via*
the nvdimm tree is the best path.

But, this series makes non-trivial changes to the "resource" code and
memory hotplug.  I'd really like to get some acks from folks on the
first three patches which affect those areas.

Borislav and Bjorn, you seem to be the most active in the resource code.

Michal, I'd really appreciate a look at all of this from a memory
hotplug perspective.

Note: these are based on commit d2f33c19644 in:

	git://git.kernel.org/pub/scm/linux/kernel/git/djbw/nvdimm.git libnvdimm-pending

Changes since v1:
 * Now based on git://git.kernel.org/pub/scm/linux/kernel/git/djbw/nvdimm.git
 * Use binding/unbinding from "dax bus" code
 * Move over to a "dax bus" driver from being an nvdimm driver

--

Persistent memory is cool.  But, currently, you have to rewrite
your applications to use it.  Wouldn't it be cool if you could
just have it show up in your system like normal RAM and get to
it like a slow blob of memory?  Well... have I got the patch
series for you!

This series adds a new "driver" to which pmem devices can be
attached.  Once attached, the memory "owned" by the device is
hot-added to the kernel and managed like any other memory.  On
systems with an HMAT (a new ACPI table), each socket (roughly)
will have a separate NUMA node for its persistent memory so
this newly-added memory can be selected by its unique NUMA
node.
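For example, once the memory is onlined (step 5 below), the pmem-backed
node can be targeted like any other NUMA node.  A minimal sketch,
assuming numactl is available and that the new node shows up as node 1
(the node number will vary by system):

	# List nodes; the pmem-backed node appears alongside DRAM nodes:
	numactl --hardware
	# Bind an application's allocations to the (assumed) pmem node:
	numactl --membind=1 ./your_app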

Here's how I set up a system to test this thing:

1. Boot qemu with lots of memory: "-m 4096", for instance
2. Reserve 512MB of physical memory.  Reserving a spot at 2GB
   physical seems to work: memmap=512M!0x0000000080000000
   (the "!" marks the range to be treated as persistent memory).
   This will end up looking like a pmem device at boot.
3. When booted, convert the fsdax device to "device dax":
	ndctl create-namespace -fe namespace0.0 -m dax
4. Bind the kmem driver to the device; see patch 4 for the full
   instructions.  A hedged sketch of the binding follows the list.
5. Now, online the new memory sections.  Perhaps:

grep ^MemTotal /proc/meminfo
# Find every memory section that is not yet online...
for f in `grep -vl online /sys/devices/system/memory/*/state`; do
	echo $f: `cat $f`
	# ...and online it as movable, so it can be offlined again later:
	echo online_movable > $f
	grep ^MemTotal /proc/meminfo
done
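
For step 4, my reading of the new "dax bus" code is that the binding
boils down to a couple of sysfs writes.  This is a hedged sketch only;
patch 4 has the authoritative instructions, and the device name dax0.0
is an assumption based on the namespace created in step 3:

	# Detach the device from the regular device-dax driver:
	echo dax0.0 > /sys/bus/dax/drivers/device_dax/unbind
	# Have the kmem driver claim it; its capacity then appears as
	# offline memory sections, ready to be onlined as in step 5:
	echo dax0.0 > /sys/bus/dax/drivers/kmem/new_id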

Cc: Dan Williams <dan.j.williams@...el.com>
Cc: Dave Jiang <dave.jiang@...el.com>
Cc: Ross Zwisler <zwisler@...nel.org>
Cc: Vishal Verma <vishal.l.verma@...el.com>
Cc: Tom Lendacky <thomas.lendacky@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Michal Hocko <mhocko@...e.com>
Cc: linux-nvdimm@...ts.01.org
Cc: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org
Cc: Huang Ying <ying.huang@...el.com>
Cc: Fengguang Wu <fengguang.wu@...el.com>
Cc: Borislav Petkov <bp@...e.de>
Cc: Bjorn Helgaas <bhelgaas@...gle.com>
Cc: Yaowei Bai <baiyaowei@...s.chinamobile.com>
Cc: Takashi Iwai <tiwai@...e.de>
