Message-ID: <960F3918-7D2C-463C-A911-9B62CD7E5D83@nvidia.com>
Date: Tue, 16 Apr 2019 11:33:48 -0400
From: Zi Yan <ziy@...dia.com>
To: Dave Hansen <dave.hansen@...el.com>
CC: Michal Hocko <mhocko@...nel.org>,
Yang Shi <yang.shi@...ux.alibaba.com>,
<mgorman@...hsingularity.net>, <riel@...riel.com>,
<hannes@...xchg.org>, <akpm@...ux-foundation.org>,
<keith.busch@...el.com>, <dan.j.williams@...el.com>,
<fengguang.wu@...el.com>, <fan.du@...el.com>,
<ying.huang@...el.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [v2 RFC PATCH 0/9] Another Approach to Use PMEM as NUMA Node
On 16 Apr 2019, at 10:30, Dave Hansen wrote:
> On 4/16/19 12:47 AM, Michal Hocko wrote:
>> You definitely have to follow policy. You cannot demote to a node which
>> is outside of the cpuset/mempolicy because you are breaking contract
>> expected by the userspace. That implies doing a rmap walk.
>
> What *is* the contract with userspace, anyway? :)
>
> Obviously, the preferred policy doesn't have any strict contract.
>
> The strict binding has a bit more of a contract, but it doesn't prevent
> swapping. Strict binding also doesn't keep another app from moving the
> memory.
>
> We have a reasonable argument that demotion is better than swapping.
> So, we could say that even if a VMA has a strict NUMA policy, demoting
> pages mapped there still beats swapping them or tossing the page
> cache. It's doing them a favor to demote them.
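
To make the userspace contract concrete: the strict policy being discussed
is the one a process sets up with mbind(2)/set_mempolicy(2). A minimal
sketch, assuming node 0 is a DRAM node (the buffer size and flag choice are
only illustrative, build with -lnuma):

    /* Strictly bind an anonymous range to node 0 (assumed DRAM). */
    #include <numaif.h>          /* mbind(), MPOL_BIND, MPOL_MF_* */
    #include <sys/mman.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            size_t len = 64UL << 20;                /* 64 MB */
            void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            /* Bind the range strictly to node 0. */
            unsigned long nodemask = 1UL << 0;
            if (mbind(buf, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8,
                      MPOL_MF_STRICT | MPOL_MF_MOVE)) {
                    perror("mbind");
                    return 1;
            }
            memset(buf, 0, len);                    /* fault pages in on node 0 */

            /* The open question above: may the kernel later demote these
             * pages to a node outside the mask (e.g. a PMEM node), or is
             * swapping the only acceptable fallback under MPOL_BIND? */
            return 0;
    }

Under MPOL_BIND the application asked for exactly that node set, which is
why demoting its pages to a node outside the mask reads like breaking the
contract, even if demotion is kinder than swapping.
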
I just wonder whether page migration is always better than swapping,
since SSD write throughput keeps improving while page migration throughput
is still low. For example, my machine has an SSD with 2GB/s write throughput,
but the throughput of 4KB page migration is less than 1GB/s. In that case,
why do we want to use page migration for demotion instead of swapping?
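
A rough way to get that kind of 4KB-page migration number from userspace is
a move_pages(2) microbenchmark along these lines (the 1GB buffer, the 4KB
page size, and node 1 as the migration target are assumptions for
illustration, build with -lnuma):

    /* Migrate 1 GB of faulted-in 4KB pages to node 1 and time it. */
    #include <numaif.h>          /* move_pages(), MPOL_MF_MOVE */
    #include <sys/mman.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
            long page_size = sysconf(_SC_PAGESIZE);  /* assumed 4KB */
            size_t npages = 256 * 1024;              /* ~1 GB of 4KB pages */
            size_t len = npages * page_size;

            char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }
            memset(buf, 1, len);                     /* fault everything in */

            void **pages = malloc(npages * sizeof(*pages));
            int *nodes = malloc(npages * sizeof(*nodes));
            int *status = malloc(npages * sizeof(*status));
            for (size_t i = 0; i < npages; i++) {
                    pages[i] = buf + i * page_size;
                    nodes[i] = 1;                    /* assumed target node */
            }

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            if (move_pages(0, npages, pages, nodes, status, MPOL_MF_MOVE) < 0) {
                    perror("move_pages");
                    return 1;
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec) +
                          (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("migrated %.1f MB in %.3f s -> %.2f GB/s\n",
                   len / 1e6, secs, len / 1e9 / secs);
            return 0;
    }
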
--
Best Regards,
Yan Zi