Message-ID: <a2e900b0-1b89-4e88-a6d4-8c0e6de50f52@amd.com>
Date: Thu, 28 Aug 2025 16:21:54 -0700
From: "Koralahalli Channabasappa, Smita"
<Smita.KoralahalliChannabasappa@....com>
To: "Zhijian Li (Fujitsu)" <lizhijian@...itsu.com>,
Alison Schofield <alison.schofield@...el.com>
Cc: "dan.j.williams@...el.com" <dan.j.williams@...el.com>,
"linux-cxl@...r.kernel.org" <linux-cxl@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"nvdimm@...ts.linux.dev" <nvdimm@...ts.linux.dev>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
Davidlohr Bueso <dave@...olabs.net>,
Jonathan Cameron <jonathan.cameron@...wei.com>,
Dave Jiang <dave.jiang@...el.com>, Vishal Verma <vishal.l.verma@...el.com>,
Ira Weiny <ira.weiny@...el.com>, Matthew Wilcox <willy@...radead.org>,
Jan Kara <jack@...e.cz>, "Rafael J . Wysocki" <rafael@...nel.org>,
Len Brown <len.brown@...el.com>, Pavel Machek <pavel@...nel.org>,
Li Ming <ming.li@...omail.com>, Jeff Johnson
<jeff.johnson@....qualcomm.com>, Ying Huang <huang.ying.caritas@...il.com>,
"Xingtao Yao (Fujitsu)" <yaoxt.fnst@...itsu.com>,
Peter Zijlstra <peterz@...radead.org>, Greg KH <gregkh@...uxfoundation.org>,
Nathan Fontenot <nathan.fontenot@....com>,
Terry Bowman <terry.bowman@....com>, Robert Richter <rrichter@....com>,
Benjamin Cheatham <benjamin.cheatham@....com>,
PradeepVineshReddy Kodamati <PradeepVineshReddy.Kodamati@....com>,
"Yasunori Gotou (Fujitsu)" <y-goto@...itsu.com>
Subject: Re: [PATCH v5 3/7] cxl/acpi: Add background worker to coordinate with
cxl_mem probe completion
Hi Zhijian,
On 8/26/2025 11:30 PM, Zhijian Li (Fujitsu) wrote:
> All,
>
>
> I have confirmed that in the !CXL_REGION configuration, the same environment may fail to fall back to hmem. (Your new patch cannot resolve this issue.)
>
> In my environment:
> - There are two CXL memory devices corresponding to:
> ```
> 5d0000000-6cffffff : CXL Window 0
> 6d0000000-7cffffff : CXL Window 1
> ```
> - E820 table contains a 'soft reserved' entry:
> ```
> [ 0.000000] BIOS-e820: [mem 0x00000005d0000000-0x00000007cfffffff] soft reserved
> ```
>
> However, since my ACPI SRAT doesn't describe the CXL memory devices (this is the key point), `acpi/hmat.c` won't allocate memory targets for them. This prevents the call chain:
> ```c
> hmat_register_target_devices() // for each SRAT-described target
> -> hmem_register_resource()
> -> insert entry into "HMEM devices" resource
> ```
>
> Therefore, for a successful fallback to hmem in this environment, `dax_hmem.ko` and `kmem.ko` must request resources BEFORE `cxl_acpi.ko` inserts 'CXL Window X'.
>
> However, the kernel cannot guarantee this initialization order.
>
> When cxl_acpi runs before dax_hmem/kmem:
> ```
> (built-in) CXL_REGION=n
> driver/dax/hmem/device.c cxl_acpi.ko dax_hmem.ko kmem.ko
>
> (1) Add entry '15d0000000-7cfffffff'
> (2) Traverse "HMEM devices"
> Insert to iomem:
> 5d0000000-7cffffff : Soft Reserved
>
> (3) Insert CXL Window 0/1
> /proc/iomem shows:
> 5d0000000-7cffffff : Soft Reserved
> 5d0000000-6cffffff : CXL Window 0
> 6d0000000-7cffffff : CXL Window 1
>
> (4) Create dax device
> (5) request_mem_region() fails
> for 5d0000000-7cffffff
> Reason: Children of 'Soft Reserved'
> (CXL Windows 0/1) don't cover full range
> ```
>
Thanks for confirming the failure point. I was thinking of two possible
ways forward here, and I would like to get feedback from others:
[1] Teach dax_hmem to split when the parent claim fails:
If __request_region() fails for the top-level Soft Reserved range
because IORES_DESC_CXL children already exist, dax_hmem could iterate
those windows and register each one individually. The downside is that
it adds some complexity and feels a bit like papering over the fact that
CXL should eventually own all of this memory. As Dan mentioned, the
long-term plan is for Linux to not need the soft-reserved fallback at
all and to simply ignore Soft Reserved for CXL Windows, because the CXL
subsystem will handle them.
[2] Unconditionally load CXL early:
Call request_module("cxl_acpi") and request_module("cxl_pci") from
dax_hmem_init(), without the IS_ENABLED(CONFIG_DEV_DAX_CXL) guard. If
those options are y/m the modules will be present; if n, the calls are
a no-op. Then, in hmem_register_device(), drop the
IS_ENABLED(CONFIG_DEV_DAX_CXL) gate and defer to CXL whenever windows
are present:

	if (region_intersects(res->start, resource_size(res),
			      IORESOURCE_MEM, IORES_DESC_CXL) != REGION_DISJOINT)
		/* defer to CXL */;

This makes Soft Reserved unavailable once CXL Windows have been
discovered, even if CXL_REGION is disabled. That aligns better with the
idea that "CXL should win" whenever a window is visible. (This also
needs to be considered alongside patch 6/6 in my series.)
With CXL_REGION=n there would be no devdax and no kmem for that range;
/proc/iomem would show only the windows, something like:

850000000-284fffffff : CXL Window 0
2850000000-484fffffff : CXL Window 1
4850000000-684fffffff : CXL Window 2

That means the memory is left unclaimed and unavailable (no System RAM,
no /dev/dax). Is that acceptable when CXL_REGION is disabled?
Thanks
Smita
> ---------------------
> In my another environment where ACPI SRAT has separate entries per CXL device:
> 1. `acpi/hmat.c` inserts two entries into "HMEM devices":
> - 5d0000000-6cffffff
> - 6d0000000-7cffffff
>
> 2. Regardless of module order, dax/kmem requests per-device resources, resulting in:
> ```
> 5d0000000-7cffffff : Soft Reserved
> 5d0000000-6cffffff : CXL Window 0
> 5d0000000-6cffffff : dax0.0
> 5d0000000-6cffffff : System RAM (kmem)
> 6d0000000-7cffffff : CXL Window 1
> 6d0000000-7cffffff : dax1.0
> 6d0000000-7cffffff : System RAM (kmem)
> ```
>
> Thanks,
> Zhijian
>
>
> On 25/08/2025 15:50, Li Zhijian wrote:
>>
>>
>> On 22/08/2025 11:56, Koralahalli Channabasappa, Smita wrote:
>>>>
>>>>>
>>>>>> ```
>>>>>>
>>>>>> 3. When CXL_REGION is disabled, there is a failure to fallback to dax_hmem, in which case only CXL Window X is visible.
>>>>>
>>>>> Haven't tested !CXL_REGION yet.
>>>
>>> When CXL_REGION is disabled, DEV_DAX_CXL will also be disabled. So dax_hmem should handle it.
>>
>> Yes, falling back to dax_hmem/kmem is the result we expect.
>> I haven't figured out the root cause of the issue yet, but I can tell you that in my QEMU environment,
>> there is currently a certain probability that it cannot fall back to dax_hmem/kmem.
>>
>> Upon its failure, I observed the following warnings and errors (with my local fixup kernel).
>> [ 12.203254] kmem dax0.0: mapping0: 0x5d0000000-0x7cfffffff could not reserve region
>> [ 12.203437] kmem dax0.0: probe with driver kmem failed with error -16
>>
>>
>>
>>> I was able to fallback to dax_hmem. But let me know if I'm missing something.
>>>
>>> config DEV_DAX_CXL
>>> tristate "CXL DAX: direct access to CXL RAM regions"
>>> depends on CXL_BUS && CXL_REGION && DEV_DAX
>>> ..
>>>
>>>>>
>>>>>> On failure:
>>>>>> ```
>>>>>> 100000000-27ffffff : System RAM
>>>>>> 5c0001128-5c00011b7 : port1
>>>>>> 5c0011128-5c00111b7 : port2
>>>>>> 5d0000000-6cffffff : CXL Window 0
>>>>>> 6d0000000-7cffffff : CXL Window 1
>>>>>> 7000000000-700000ffff : PCI Bus 0000:0c
>>>>>> 7000000000-700000ffff : 0000:0c:00.0
>>>>>> 7000001080-70000010d7 : mem1
>>>>>> ```
>>>>>>
>>>>>> On success:
>>>>>> ```
>>>>>> 5d0000000-7cffffff : dax0.0
>>>>>> 5d0000000-7cffffff : System RAM (kmem)
>>>>>> 5d0000000-6cffffff : CXL Window 0
>>>>>> 6d0000000-7cffffff : CXL Window 1
>>>>>> ```