Message-ID: <8ff09d53-1b74-2efb-98b2-ce10eaeffed9@intel.com>
Date: Wed, 27 Mar 2019 13:14:56 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Yang Shi <yang.shi@...ux.alibaba.com>,
Dan Williams <dan.j.williams@...el.com>,
Michal Hocko <mhocko@...nel.org>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
Rik van Riel <riel@...riel.com>,
Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Keith Busch <keith.busch@...el.com>,
Fengguang Wu <fengguang.wu@...el.com>,
"Du, Fan" <fan.du@...el.com>, "Huang, Ying" <ying.huang@...el.com>,
Linux MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 0/10] Another Approach to Use PMEM as NUMA Node

On 3/27/19 11:59 AM, Yang Shi wrote:
> In a real production environment we don't know which applications
> will end up on PMEM (DRAM may be full, so allocations fall back to
> PMEM) and then see unexpected performance degradation. I understand
> that a mempolicy can be used to avoid it. But there might be hundreds
> or thousands of applications running on the machine, so it doesn't
> seem feasible to me to have every single application set a mempolicy
> to avoid it.
Maybe not manually, but it's entirely possible to automate this.
It would be trivial to get help from an orchestrator, or even systemd,
to launch apps with a particular policy. Or even a *shell* that
launches apps with a particular policy.
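
As a rough illustration (not part of this patch set), a tiny launcher
can set a DRAM-only MPOL_BIND policy and then exec the real program;
the task policy is preserved across execve() and inherited across
fork(), so the whole process tree stays off the PMEM node. The sketch
below assumes libnuma's set_mempolicy() wrapper (<numaif.h>, link with
-lnuma) and takes a hypothetical DRAM node number on the command line:

/*
 * launch-dram.c: hypothetical launcher, not from this thread.
 *
 * Build: gcc -o launch-dram launch-dram.c -lnuma
 * Usage: ./launch-dram <dram-node> <program> [args...]
 */
#include <numaif.h>	/* set_mempolicy(), MPOL_BIND */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	unsigned long nodemask;
	int node;

	if (argc < 3) {
		fprintf(stderr, "usage: %s <dram-node> <program> [args...]\n",
			argv[0]);
		return 1;
	}

	node = atoi(argv[1]);
	nodemask = 1UL << node;

	/*
	 * Bind all future allocations of this task to the given node.
	 * The policy survives execve() below and is inherited by any
	 * children the launched app forks.
	 */
	if (set_mempolicy(MPOL_BIND, &nodemask,
			  sizeof(nodemask) * 8) < 0) {
		perror("set_mempolicy");
		return 1;
	}

	execvp(argv[2], &argv[2]);
	perror("execvp");
	return 1;
}

In practice, "numactl --membind=<dram-nodes> <app>" already gives the
same effect without writing any code, which is exactly the kind of
wrapper an orchestrator, systemd, or a launching shell could invoke.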