Message-ID: <e1600e8d3986b1ed371847d4863628b8d7ad2091.camel@intel.com>
Date: Tue, 3 Oct 2023 23:48:32 +0000
From: "Verma, Vishal L" <vishal.l.verma@...el.com>
To: "aneesh.kumar@...ux.ibm.com" <aneesh.kumar@...ux.ibm.com>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"Jiang, Dave" <dave.jiang@...el.com>,
"osalvador@...e.de" <osalvador@...e.de>,
"david@...hat.com" <david@...hat.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
CC: "Hocko, Michal" <mhocko@...e.com>,
"Huang, Ying" <ying.huang@...el.com>,
"Jonathan.Cameron@...wei.com" <Jonathan.Cameron@...wei.com>,
"linux-cxl@...r.kernel.org" <linux-cxl@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"jmoyer@...hat.com" <jmoyer@...hat.com>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"nvdimm@...ts.linux.dev" <nvdimm@...ts.linux.dev>
Subject: Re: [PATCH v4 2/2] dax/kmem: allow kmem to add memory with
 memmap_on_memory

On Tue, 2023-10-03 at 09:34 +0530, Aneesh Kumar K V wrote:
> On 9/29/23 2:00 AM, Vishal Verma wrote:
> > Large amounts of memory managed by the kmem driver may come in via CXL,
> > and it is often desirable to have the memmap for this memory on the new
> > memory itself.
> >
> > Enroll kmem-managed memory for memmap_on_memory semantics if the dax
> > region originates via CXL. For non-CXL dax regions, retain the existing
> > default behavior of hot adding without memmap_on_memory semantics.
> >
>
> Are we not looking at doing altmap space for CXL DAX regions? The last
> discussion around this suggested we look at doing this via an altmap
> reservation, so that we get contiguous space for device memory,
> enabling us to map it via 1G direct-mapping entries.
>
Hey Aneesh - was this on a previous posting, or somewhere else? Do you
have a link so I can refresh myself on what the discussion was?
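
For context, the mechanism in this patch is roughly the below - a
simplified sketch rather than the literal diff, and how the "is this a
CXL region" decision gets plumbed in (the cxl_region argument here) is
hand-waved:

#include <linux/memory_hotplug.h>
#include <linux/range.h>

/*
 * Simplified sketch of the kmem hotplug path. The real driver walks
 * the dev_dax ranges and manages memory groups; only the flag
 * handling is of interest here.
 */
static int kmem_add_range(int mgid, struct range *range,
			  const char *res_name, bool cxl_region)
{
	mhp_t mhp_flags = MHP_NID_IS_MGID;

	/*
	 * Request memmap-on-memory semantics only for CXL-originated
	 * regions; non-CXL dax regions keep the existing default of
	 * allocating the memmap from main memory.
	 */
	if (cxl_region)
		mhp_flags |= MHP_MEMMAP_ON_MEMORY;

	return add_memory_driver_managed(mgid, range->start,
					 range_len(range), res_name,
					 mhp_flags);
}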
If it is about enabling something in CXL similar to the --map=mem mode
for pmem + device-dax, that could be incremental to this.
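
If instead it's the altmap reservation with padding for 1G alignment,
here is roughly what I picture - illustrative only, the sizing math
assumes 4K pages and a 64-byte struct page, and none of this is in the
current patch:

#include <linux/align.h>
#include <linux/memremap.h>
#include <linux/pfn.h>
#include <linux/sizes.h>

/*
 * Illustrative only: carve the memmap out of the head of the
 * hot-added range, padded so the first data pfn lands back on a 1G
 * boundary (assuming 'start' itself is 1G-aligned). That keeps the
 * remaining device memory eligible for 1G direct-map entries.
 * Caller passes in a zeroed vmem_altmap.
 */
static void kmem_init_altmap(struct vmem_altmap *altmap, u64 start,
			     u64 size)
{
	/* memmap is ~size/64 bytes with 4K pages, 64-byte struct page */
	unsigned long memmap_pfns = PHYS_PFN(size) / 64;

	altmap->base_pfn = PHYS_PFN(start);
	/* pfns at the head of the range handed to vmemmap allocation */
	altmap->free = ALIGN(memmap_pfns, PHYS_PFN(SZ_1G));
}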