Message-ID: <20190327003541.GE4328@localhost.localdomain>
Date: Tue, 26 Mar 2019 18:35:41 -0600
From: Keith Busch <kbusch@...nel.org>
To: Yang Shi <yang.shi@...ux.alibaba.com>
Cc: mhocko@...e.com, mgorman@...hsingularity.net, riel@...riel.com,
hannes@...xchg.org, akpm@...ux-foundation.org,
dave.hansen@...el.com, keith.busch@...el.com,
dan.j.williams@...el.com, fengguang.wu@...el.com, fan.du@...el.com,
ying.huang@...el.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 06/10] mm: vmscan: demote anon DRAM pages to PMEM node
On Mon, Mar 25, 2019 at 12:49:21PM -0700, Yang Shi wrote:
> On 3/24/19 3:20 PM, Keith Busch wrote:
> > How do these pages eventually get to swap when migration fails? Looks
> > like that's skipped.
>
> Yes, they will just be put back on the LRU. Actually, I don't expect
> migration to fail very often at this stage (though I have no test data to
> support this hypothesis), since the pages have been isolated from the LRU,
> so other reclaim paths should not find them anymore.
>
> If a page is locked by someone else right before migration, it has likely
> been referenced again, so putting it back on the LRU does not sound bad.
>
> A potential improvement is to have sync migration for kswapd.
Well, it's not that migration fails only if the page was recently
referenced. Migration will also fail if there isn't memory available on
the migration node, so this implementation carries an expectation that
migration nodes have higher free capacity than source nodes. And since
you're attempting THPs without ever splitting them, that also requires
lower fragmentation on the target node for a successful migration.
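
To make the THP concern concrete, here is roughly the kind of
split-on-failure fallback I would expect; this is only a sketch, not
code from your series. demote_thp_or_split() and alloc_demote_page()
are made-up names (the latter standing in for whatever new_page_t
callback the series uses to allocate on the target node), and the
migrate reason is just a placeholder:

	/*
	 * Sketch: try to demote the THP as a whole; if the target node
	 * is too full or fragmented for a huge page, split it and retry
	 * with the base pages rather than rotating the THP back onto
	 * the LRU.  alloc_demote_page() is assumed to allocate on the
	 * node passed in via the private argument.
	 */
	static int demote_thp_or_split(struct page *page, int target_nid,
				       struct list_head *demote_pages)
	{
		int err;

		err = migrate_pages(demote_pages, alloc_demote_page, NULL,
				    target_nid, MIGRATE_ASYNC,
				    MR_NUMA_MISPLACED);
		if (!err)
			return 0;

		/* Target likely cannot satisfy a huge page: split and retry. */
		if (PageTransHuge(page) &&
		    !split_huge_page_to_list(page, demote_pages))
			err = migrate_pages(demote_pages, alloc_demote_page,
					    NULL, target_nid, MIGRATE_ASYNC,
					    MR_NUMA_MISPLACED);

		return err;
	}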
Applications, however, may allocate and pin pages directly out of that
migration node to the point that it no longer has much free capacity or
physical contiguity, so we probably shouldn't assume demotion is the
only way to reclaim these pages.
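
Getting back to my earlier swap question, what I would hope for in
shrink_page_list() is something along these lines; again just a sketch
under my own assumptions, where demote_page_list() is a hypothetical
stand-in for the series' call into migrate_pages() and demote_pages is
a local list of isolated demotion candidates:

	/*
	 * Sketch: treat demotion as an opportunistic first step.  Pages
	 * that fail to migrate stay on the local list and continue down
	 * the existing add_to_swap()/pageout() path, instead of being
	 * put back on the LRU untouched.
	 */
	demote_page_list(&demote_pages, target_nid);
	if (!list_empty(&demote_pages))
		/* Demotion failed for these; let normal reclaim try them. */
		list_splice_init(&demote_pages, &page_list);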