Message-ID: <ee0539e9-e123-e871-dae5-30d09e010c76@linux.ibm.com>
Date: Tue, 5 Jul 2022 09:47:58 +0530
From: Aneesh Kumar K V <aneesh.kumar@...ux.ibm.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org,
Wei Xu <weixugc@...gle.com>, Huang Ying <ying.huang@...el.com>,
Yang Shi <shy828301@...il.com>,
Davidlohr Bueso <dave@...olabs.net>,
Tim C Chen <tim.c.chen@...el.com>,
Michal Hocko <mhocko@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Hesham Almatary <hesham.almatary@...wei.com>,
Dave Hansen <dave.hansen@...el.com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Alistair Popple <apopple@...dia.com>,
Dan Williams <dan.j.williams@...el.com>,
Johannes Weiner <hannes@...xchg.org>, jvgediya.oss@...il.com
Subject: Re: [PATCH v8 00/12] mm/demotion: Memory tiers and demotion
On 7/4/22 8:30 PM, Matthew Wilcox wrote:
> On Mon, Jul 04, 2022 at 12:36:00PM +0530, Aneesh Kumar K.V wrote:
>> * The current tier initialization code always initializes
>> each memory-only NUMA node into a lower tier. But a memory-only
>> NUMA node may have a high performance memory device (e.g. a DRAM
>> device attached via CXL.mem or a DRAM-backed memory-only node on
>> a virtual machine) and should be put into a higher tier.
>>
>> * The current tier hierarchy always puts CPU nodes into the top
>> tier. But on a system with HBM (e.g. GPU memory) devices, these
>> memory-only HBM NUMA nodes should be in the top tier, and DRAM nodes
>> with CPUs are better to be placed into the next lower tier.
>
> These things that you identify as problems seem perfectly sensible to me.
> Memory which is attached to this CPU has the lowest latency and should
> be preferred over more remote memory, no matter its bandwidth.

Allocation will prefer local memory over remote memory. Memory tiers come into
play during demotion: currently the kernel demotes cold pages from DRAM to these
special device memories because they appear as memory-only NUMA nodes. In many
cases (e.g. GPU memory) what is desired is the opposite: demote cold pages from
GPU memory to DRAM, or even to slower memory.

This patchset builds a framework to enable such demotion criteria.
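
To make the intended direction concrete, here is a rough userspace sketch
(purely illustrative; the node numbering, the node_tier[] table and
demotion_target() are made-up names, not the kernel's data structures or the
API this series adds) of what explicit per-node tiers allow: a memory-only
GPU/HBM node sits above DRAM, so cold pages flow GPU -> DRAM -> slower memory
instead of DRAM -> "any memory-only node".

#include <stdio.h>

#define MAX_NODES 4

/*
 * Assumed example topology: node 0 = GPU/HBM (memory-only but fast),
 * node 1 = CPU + DRAM, node 2 = CXL-attached DRAM (memory-only),
 * node 3 = slow memory (e.g. pmem). Lower tier value = faster tier.
 */
static const int node_tier[MAX_NODES] = { 0, 1, 1, 2 };

/* Pick the first node found in the next lower (slower) tier. */
static int demotion_target(int node)
{
	int target_tier = node_tier[node] + 1;

	for (int n = 0; n < MAX_NODES; n++)
		if (node_tier[n] == target_tier)
			return n;
	return -1;	/* already in the lowest tier: nothing to demote to */
}

int main(void)
{
	for (int n = 0; n < MAX_NODES; n++)
		printf("node %d -> demotion target %d\n", n, demotion_target(n));
	return 0;
}

With the current CPU-based placement, node 0 would instead end up below the
DRAM nodes and become a demotion target for node 1, which is exactly the
problem described above.
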
-aneesh