Message-ID: <20190118114846.hmmcagscyjeycyfy@wfg-t540p.sh.intel.com>
Date: Fri, 18 Jan 2019 19:48:46 +0800
From: Fengguang Wu <fengguang.wu@...el.com>
To: Jeff Moyer <jmoyer@...hat.com>
Cc: Keith Busch <keith.busch@...el.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
thomas.lendacky@....com, dave@...1.net, linux-nvdimm@...ts.01.org,
tiwai@...e.de, zwisler@...nel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, mhocko@...e.com,
baiyaowei@...s.chinamobile.com, ying.huang@...el.com,
bhelgaas@...gle.com, akpm@...ux-foundation.org, bp@...e.de
Subject: Re: [PATCH 0/4] Allow persistent memory to be used like normal RAM
>With this patch set, an unmodified application would either use:
>
>1) whatever memory it happened to get
>2) only the faster dram (via numactl --membind=)
>3) only the slower pmem (again, via numactl --membind=)
>4) preferentially one or the other (numactl --preferred=)
Yet another option:
MemoryOptimizer -- hot page accounting and migration daemon
https://github.com/intel/memory-optimizer
Once PMEM NUMA nodes are available, we may run a user space daemon to
walk the page tables of virtual machines (via EPT) or processes,
collect the "accessed" bits to identify hot pages, and then migrate
hot pages to DRAM and cold pages to PMEM.
In that scenario, only the kernel and the migration daemon need to be
aware of the PMEM nodes. Unmodified virtual machines and processes can
enjoy the added memory space without knowing whether they are using
DRAM or PMEM.
Thanks,
Fengguang