Message-ID: <f0d66b0c-c9b6-a040-c485-1606041a70a2@linux.alibaba.com>
Date: Wed, 30 Jan 2019 09:26:41 -0800
From: Yang Shi <yang.shi@...ux.alibaba.com>
To: lsf-pc@...ts.linux-foundation.org, linux-mm@...ck.org,
linux-kernel <linux-kernel@...r.kernel.org>
Cc: mhocko@...e.com, hannes@...xchg.org, dan.j.williams@...el.com,
dave.hansen@...ux.intel.com, fengguang.wu@...el.com,
Yang Shi <yang.shi@...ux.alibaba.com>
Subject: [LSF/MM TOPIC] Use NVDIMM as NUMA node and NUMA API
Hi folks,

I would like to attend LSF/MM Summit 2019. I'm interested in most MM
topics, particularly the NUMA API topic proposed by Jerome, since it is
related to my proposal below.

I would like to share some of our use cases, needs, and approaches for
using NVDIMM as a NUMA node.

We would like to provide NVDIMM to our cloud customers as low-cost
memory. Virtual machines could run with NVDIMM as backing memory. We
would like the following needs to be met:
  * The ratio of DRAM vs. NVDIMM is configurable per process, or even
    per VMA
  * The user VMs always get DRAM first as long as the ratio is not
    reached
  * Cold data is migrated to NVDIMM and hot data is kept in DRAM
    dynamically, throughout the lifetime of the VMs

To meet these needs, we did some in-house implementation work:
  * Provide a madvise() interface to configure the ratio (see the
    sketch after this list)
  * Put NVDIMM into a separate zonelist so that the default allocation
    path can't touch it unless it is explicitly requested
  * A kernel thread scans for cold pages
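
To make the madvise idea above concrete, here is a purely hypothetical
sketch of the shape of such an interface. MADV_DRAM_RATIO and the
percent encoding are made up for illustration and are not the actual
in-house ABI; on an upstream kernel this call would simply fail with
EINVAL:

    #include <sys/mman.h>

    /* Hypothetical advice value, for illustration only; not upstream
     * and not necessarily what our in-house kernel uses. */
    #define MADV_DRAM_RATIO 64

    /* Ask the kernel to back at most dram_percent% of [addr, addr+len)
     * with DRAM and the rest with NVDIMM. The percentage is (again,
     * hypothetically) folded into the advice value, since madvise()
     * has no extra argument to carry it. */
    static int set_dram_ratio(void *addr, size_t len, int dram_percent)
    {
            return madvise(addr, len, MADV_DRAM_RATIO + dram_percent);
    }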

We tried to just use the existing NUMA APIs, but we realized they can't
meet our needs. For example, if we want a VMA to use 50% DRAM and 50%
NVDIMM, mbind() can set a preferred-node policy (the DRAM node or the
NVDIMM node) for that VMA, but it can't control how much DRAM or NVDIMM
this specific VMA actually uses, so the ratio can't be satisfied.
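
For reference, this is roughly all the existing API lets us express per
VMA (real mbind(2) with MPOL_PREFERRED; prefer_node and dram_node are
just illustrative names): one preferred node for the whole range, with
no way to say "at most 50% of this VMA from that node":

    #include <numaif.h>   /* mbind(), MPOL_PREFERRED; link with -lnuma */
    #include <stddef.h>

    /* Prefer one node (e.g. the DRAM node) for the whole VMA. */
    static long prefer_node(void *addr, size_t len, int dram_node)
    {
            unsigned long nodemask = 1UL << dram_node;

            /* One policy covers the whole range; nothing here can cap
             * how much of the VMA ends up on dram_node vs. the NVDIMM
             * node. */
            return mbind(addr, len, MPOL_PREFERRED, &nodemask,
                         sizeof(nodemask) * 8, 0);
    }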

So, IMHO, we definitely need more fine-grained APIs to control NUMA
placement behavior.

I'd also like to discuss this topic with:
Dave Hansen
Dan Williams
Fengguang Wu

Besides the above topic, I'd also like to meet other MM developers to
discuss some of our memory cgroup use cases (a hallway conversation may
be good enough). I submitted some RFC patches to the mailing list, and
they did generate some discussion, but we have not reached a solid
conclusion yet:
https://lore.kernel.org/lkml/1547061285-100329-1-git-send-email-yang.shi@linux.alibaba.com/
Thanks,
Yang