Message-ID: <20190328224549.GA11100@localhost.localdomain>
Date: Thu, 28 Mar 2019 16:45:50 -0600
From: Keith Busch <kbusch@...nel.org>
To: Yang Shi <yang.shi@...ux.alibaba.com>
Cc: "mhocko@...e.com" <mhocko@...e.com>,
"mgorman@...hsingularity.net" <mgorman@...hsingularity.net>,
"riel@...riel.com" <riel@...riel.com>,
"hannes@...xchg.org" <hannes@...xchg.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"Hansen, Dave" <dave.hansen@...el.com>,
"Busch, Keith" <keith.busch@...el.com>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"Wu, Fengguang" <fengguang.wu@...el.com>,
"Du, Fan" <fan.du@...el.com>, "Huang, Ying" <ying.huang@...el.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 06/10] mm: vmscan: demote anon DRAM pages to PMEM node
On Thu, Mar 28, 2019 at 02:59:30PM -0700, Yang Shi wrote:
> Yes, it could still fail. I can't tell which way is better for now. Off
> the top of my head, I just thought that scanning another round and then
> migrating should still be faster than swapping.
I think it depends on the relative capacities of your primary and
migration tiers and how they're used. Applications may allocate and pin
memory directly out of pmem if they wish, so it's not a dedicated fallback
memory space like swap.
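As a concrete example of that last point, an application can already do
something like the sketch below, assuming the pmem is onlined as a regular
NUMA node (node 1 here is again only an assumption). Once the range is
mlock()ed it stays resident, so it can't be reclaimed or swapped out from
under the application, which is exactly what makes the pmem node different
from a dedicated swap device:

/*
 * Sketch: allocate directly out of the (assumed) PMEM node and pin it.
 *
 * Build with: gcc -o pmem_pin pmem_pin.c -lnuma
 */
#include <numa.h>		/* numa_alloc_onnode(), numa_free() */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	size_t len = 64UL << 20;	/* 64 MB */
	int pmem_node = 1;		/* assumed PMEM node */

	void *buf = numa_alloc_onnode(len, pmem_node);
	if (!buf)
		return 1;

	/* Keep the pages resident: no reclaim or swap-out behind our back. */
	if (mlock(buf, len)) {
		perror("mlock");
		numa_free(buf, len);
		return 1;
	}

	/* ... use buf ... */

	munlock(buf, len);
	numa_free(buf, len);
	return 0;
}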