Message-ID: <1ECE2357-DEBC-4E46-99CA-34BE894161CF@nvidia.com>
Date: Tue, 27 Jan 2026 16:12:00 -0500
From: Zi Yan <ziy@...dia.com>
To: Jordan Niethe <jniethe@...dia.com>
Cc: linux-mm@...ck.org, balbirs@...dia.com, matthew.brost@...el.com,
 akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
 dri-devel@...ts.freedesktop.org, david@...hat.com, apopple@...dia.com,
 lorenzo.stoakes@...cle.com, lyude@...hat.com, dakr@...nel.org,
 airlied@...il.com, simona@...ll.ch, rcampbell@...dia.com,
 mpenttil@...hat.com, jgg@...dia.com, willy@...radead.org,
 linuxppc-dev@...ts.ozlabs.org, intel-xe@...ts.freedesktop.org, jgg@...pe.ca,
 Felix.Kuehling@....com, jhubbard@...dia.com
Subject: Re: [PATCH v3 13/13] mm: Remove device private pages from the
 physical address space

On 23 Jan 2026, at 1:23, Jordan Niethe wrote:

> Currently when creating device private struct pages, the first step is
> to use request_free_mem_region() to get a range of physical address
> space large enough to represent the device's memory. This allocated
> physical address range is then remapped as device private memory using
> memremap_pages().
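
For readers following along, the flow being replaced looks roughly
like this (modelled on the lib/test_hmm.c usage; error handling and
driver-specific setup trimmed):

	struct resource *res;
	void *ptr;

	/* 1) Carve a chunk of physical address space out of iomem. */
	res = request_free_mem_region(&iomem_resource, DEVMEM_CHUNK_SIZE,
				      "hmm_dmirror");
	if (IS_ERR_OR_NULL(res))
		return -ENOMEM;

	/* 2) Remap that physical range as device private struct pages. */
	pgmap->type = MEMORY_DEVICE_PRIVATE;
	pgmap->range.start = res->start;
	pgmap->range.end = res->end;
	pgmap->nr_range = 1;
	pgmap->ops = &dmirror_devmem_ops;
	ptr = memremap_pages(pgmap, numa_node_id());
	if (IS_ERR_OR_NULL(ptr))
		return PTR_ERR(ptr);
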
>
> Needing allocation of physical address space has some problems:
>
>   1) There may be insufficient physical address space to represent the
>      device memory. KASLR reducing the physical address space and VM
>      configurations with limited physical address space increase the
>      likelihood of hitting this, especially as device memory sizes grow.
>      This has been observed to prevent device private memory from being
>      initialized.
>
>   2) Attempting to add the device private pages to the linear map at
>      addresses beyond the actual physical memory causes issues on
>      architectures like aarch64, meaning the feature does not work there.
>
> Instead of using the physical address space, introduce a device private
> address space and allocate device regions from there to represent the
> device private pages.
>
> Introduce a new interface memremap_device_private_pagemap() that
> allocates a requested amount of device private address space and creates
> the necessary device private pages.
>
> To support this new interface, struct dev_pagemap needs some changes:
>
>   - Add a new dev_pagemap::nr_pages field as an input parameter.
>   - Add a new dev_pagemap::pages array to store the device
>     private pages.
>
> When using memremap_device_private_pagemap(), rather than passing in
> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space to
> be remapped, dev_pagemap::nr_ranges will always be 1, and the device
> private range that is reserved is returned in dev_pagemap::range.
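
The prototype itself isn't quoted in this excerpt, so purely as a
hedged sketch of what a caller looks like under the new scheme
(everything except memremap_device_private_pagemap() and the
dev_pagemap fields described above is made up, including the
errno-style int return):

	pgmap->type = MEMORY_DEVICE_PRIVATE;
	pgmap->nr_pages = ndevpages;	/* new input field */
	pgmap->ops = &my_devmem_ops;	/* hypothetical ops table */

	ret = memremap_device_private_pagemap(pgmap, nid);
	if (ret)
		return ret;

	/*
	 * nr_ranges is always 1; the reserved device private range
	 * comes back in pgmap->range rather than being passed in.
	 */
	first_offset = pgmap->range.start;
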
>
> Forbid calling memremap_pages() with dev_pagemap::type =
> MEMORY_DEVICE_PRIVATE.
>
> Represent this device private address space using a new
> device_private_pgmap_tree maple tree. This tree maps a given device
> private address to a struct dev_pagemap, where a specific device private
> page may then be looked up in that dev_pagemap::pages array.
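
Presumably the lookup side goes through the usual maple tree
accessors; a minimal sketch, assuming byte-granular offsets and a
flat pages array (only the tree and helper names below come from the
message, the rest is guesswork):

	struct page *device_private_offset_to_page(unsigned long offset)
	{
		struct dev_pagemap *pgmap;

		pgmap = mtree_load(&device_private_pgmap_tree, offset);
		if (!pgmap)
			return NULL;

		/* Index into the pagemap's preallocated pages array. */
		return &pgmap->pages[(offset - pgmap->range.start) >>
				     PAGE_SHIFT];
	}
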
>
> Device private address space can be reclaimed and the associated device
> private pages freed using the corresponding new
> memunmap_device_private_pagemap() interface.
>
> Because the device private pages now live outside the physical address
> space, they no longer have a normal PFN. This means that page_to_pfn(),
> et al. are no longer meaningful.
>
> Introduce helpers:
>
>   - device_private_page_to_offset()
>   - device_private_folio_to_offset()
>
> to take a given device private page / folio and return its offset within
> the device private address space.
>
> Update the places where we previously converted a device private page to
> a PFN to use these new helpers. When we encounter a device private
> offset, look up its page with device_private_offset_to_page() instead of
> searching within the pagemap.
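
Conversely, I'd expect device_private_page_to_offset() to be roughly
the inverse, something like the sketch below (assuming the existing
page_pgmap() accessor applies and that pages is a flat struct page
array, both unverified on my end):

	unsigned long device_private_page_to_offset(struct page *page)
	{
		struct dev_pagemap *pgmap = page_pgmap(page);

		/* Byte offset of this page within the reserved range. */
		return pgmap->range.start +
		       ((page - pgmap->pages) << PAGE_SHIFT);
	}
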
>
> Update the existing users:
>
>  - lib/test_hmm.c
>  - ppc ultravisor
>  - drm/amd/amdkfd
>  - gpu/drm/xe
>  - gpu/drm/nouveau
>
> to use the new memremap_device_private_pagemap() interface.
>
> Signed-off-by: Jordan Niethe <jniethe@...dia.com>
> Signed-off-by: Alistair Popple <apopple@...dia.com>
>
> ---
> v1:
> - Include NUMA node parameter for memremap_device_private_pagemap()
> - Add devm_memremap_device_private_pagemap() and friends
> - Update existing users of memremap_pages():
>     - ppc ultravisor
>     - drm/amd/amdkfd
>     - gpu/drm/xe
>     - gpu/drm/nouveau
> - Update for HMM huge page support
> - Guard device_private_offset_to_page and friends with CONFIG_ZONE_DEVICE
>
> v2:
> - Make sure last member of struct dev_pagemap remains DECLARE_FLEX_ARRAY(struct range, ranges);
>
> v3:
> - Use numa_mem_id() if memremap_device_private_pagemap is called with
>   NUMA_NO_NODE. This fixes a null pointer deref in
>   lruvec_stat_mod_folio().
> - drm/xe: Remove call to devm_release_mem_region() in xe_pagemap_destroy_work()
> - s/VM_BUG/VM_WARN/
> ---

<snip>

>  include/linux/migrate.h                  |   6 +-
>  include/linux/mm.h                       |   2 +
>  include/linux/rmap.h                     |   5 +
>  include/linux/swapops.h                  |  10 +-


<snip>

>  mm/debug.c                               |   9 +-

<snip>

>  mm/mm_init.c                             |   8 +-
>  mm/page_vma_mapped.c                     |  22 ++-
>  mm/rmap.c                                |  43 +++--
>  mm/util.c                                |   5 +-
>  19 files changed, 396 insertions(+), 201 deletions(-)

The changes to the above files (core MM files) look good to me.

Some nits below:

<snip>

> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 039a2d71e92f..e61a0e49a7c9 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,
>  static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)
>  {
>  	unsigned long pfn;
> +	bool device_private = false;
>  	pte_t ptent = ptep_get(pvmw->pte);
>
>  	if (pvmw->flags & PVMW_MIGRATION) {
> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)
>  		if (!softleaf_is_migration(entry))
>  			return false;
>
> +		if (softleaf_is_migration_device_private(entry))
> +			device_private = true;
> +
>  		pfn = softleaf_to_pfn(entry);
>  	} else if (pte_present(ptent)) {
>  		pfn = pte_pfn(ptent);
> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)
>  			return false;
>
>  		pfn = softleaf_to_pfn(entry);
> +
> +		if (softleaf_is_device_private(entry))
> +			device_private = true;
>  	}
>
> +	if ((device_private) ^ !!(pvmw->flags & PVMW_DEVICE_PRIVATE))

Would “device_private != !!(pvmw->flags & PVMW_DEVICE_PRIVATE)” be more
readable? Also I wonder if “!!” is needed here, since I remember modern
C can convert “pvmw->flags & PVMW_DEVICE_PRIVATE” to bool.
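
Though thinking about it more, with "!=" the "!!" probably still
matters: the implicit conversion to bool happens on assignment, but in
a comparison both operands promote to int, so the unnormalized mask
value would compare unequal to a true bool. A quick userspace demo
(flag value made up for illustration):

	#include <stdbool.h>
	#include <stdio.h>

	#define PVMW_DEVICE_PRIVATE (1 << 2)	/* illustrative value */

	int main(void)
	{
		bool device_private = true;
		unsigned long flags = PVMW_DEVICE_PRIVATE;

		/* 1 != 4: reports a mismatch despite both being "true" */
		printf("%d\n", device_private != (flags & PVMW_DEVICE_PRIVATE));
		/* 1 != 1: 0, as intended */
		printf("%d\n", device_private != !!(flags & PVMW_DEVICE_PRIVATE));
		return 0;
	}
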

> +		return false;
> +
>  	if ((pfn + pte_nr - 1) < pvmw->pfn)
>  		return false;
>  	if (pfn > (pvmw->pfn + pvmw->nr_pages - 1))
> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)
>  }
>
>  /* Returns true if the two ranges overlap.  Careful to not overflow. */
> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)
>  {
> +	if ((device_private) ^ !!(pvmw->flags & PVMW_DEVICE_PRIVATE))

Ditto.

Feel free to add:

Reviewed-by: Zi Yan <ziy@...dia.com> # for MM changes

Best Regards,
Yan, Zi
