Message-ID: <875x8kbkaz.fsf@DESKTOP-5N7EMDA>
Date: Thu, 29 Jan 2026 21:49:40 +0800
From: "Huang, Ying" <ying.huang@...ux.alibaba.com>
To: Jordan Niethe <jniethe@...dia.com>
Cc: linux-mm@...ck.org, balbirs@...dia.com, matthew.brost@...el.com,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
dri-devel@...ts.freedesktop.org, david@...hat.com, ziy@...dia.com,
apopple@...dia.com, lorenzo.stoakes@...cle.com, lyude@...hat.com,
dakr@...nel.org, airlied@...il.com, simona@...ll.ch,
rcampbell@...dia.com, mpenttil@...hat.com, jgg@...dia.com,
willy@...radead.org, linuxppc-dev@...ts.ozlabs.org,
intel-xe@...ts.freedesktop.org, jgg@...pe.ca, Felix.Kuehling@....com,
jhubbard@...dia.com
Subject: Re: [PATCH v3 00/13] Remove device private pages from physical
address space

Hi, Jordan,

Jordan Niethe <jniethe@...dia.com> writes:

> Introduction
> ------------
>
> The existing design of device private memory imposes limitations which
> render it non-functional for certain systems and configurations - this
> series removes those limitations. These issues are:
>
> 1) Limited available physical address space
> 2) Conflicts with the aarch64 mm implementation
>
> Limited available address space
> -------------------------------
>
> Device private memory is implemented by first reserving a region of the
> physical address space. This is a problem. The physical address space is
> not a resource that is directly under the kernel's control. The
> availability of suitable physical address space is constrained by the
> underlying hardware and firmware, so suitable space may not always exist.
>
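
For reference, a minimal sketch of the reservation step the current design
requires, modeled on the pattern in lib/test_hmm.c (error handling trimmed;
devmem, DEVMEM_CHUNK_SIZE and dmirror_devmem_ops are that test driver's own
names):

  struct resource *res;
  void *ptr;

  /* Carve a device-memory-sized hole out of the physical address space. */
  res = request_free_mem_region(&iomem_resource, DEVMEM_CHUNK_SIZE,
                                "hmm_dmirror");
  if (IS_ERR_OR_NULL(res))
          return -ENOMEM;  /* no suitable physical address space found */

  devmem->pagemap.type = MEMORY_DEVICE_PRIVATE;
  devmem->pagemap.range.start = res->start;
  devmem->pagemap.range.end = res->end;
  devmem->pagemap.nr_range = 1;
  devmem->pagemap.ops = &dmirror_devmem_ops; /* migrate_to_ram/page_free */

  /* Create ZONE_DEVICE struct pages covering the reserved physical range. */
  ptr = memremap_pages(&devmem->pagemap, numa_node_id());

It is the request_free_mem_region() call that fails outright when the
physical address space above RAM is already occupied.
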
> Device private memory assumes that it will be able to reserve a
> device-memory-sized chunk of physical address space. However, there is
> nothing guaranteeing that this will succeed, and there are a number of
> factors that increase the likelihood of failure. We need to consider
> what else may exist in the physical address space. It has been observed
> that certain VM configurations place very large PCI windows immediately
> after RAM - large enough that there is no physical address space
> available at all for device private memory. This is more likely to
> occur on systems with a 43-bit physical address width, which have less
> physical address space to begin with.
>
> The fundamental issue is that the physical address space is not a
> resource the kernel can rely on being able to allocate from at will.
>
> aarch64 issues
> --------------
>
> The current device private memory implementation has further issues on
> aarch64. On aarch64, vmemmap is sized to cover RAM only. Adding device
> private pages to the linear map then means that, for a device private
> page, pfn_to_page() will read beyond the end of the vmemmap region,
> leading to potential memory corruption. This means that device private
> memory does not work reliably on aarch64 [0].
>
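
To make the failure mode concrete: with CONFIG_SPARSEMEM_VMEMMAP,
pfn_to_page() is essentially array indexing into vmemmap, as in the generic
definitions (include/asm-generic/memory_model.h):

  /* CONFIG_SPARSEMEM_VMEMMAP: the memmap is a flat virtual array. */
  #define __pfn_to_page(pfn)      (vmemmap + (pfn))
  #define __page_to_pfn(page)     (unsigned long)((page) - vmemmap)

Since arm64 sizes vmemmap to span only the struct pages for RAM, a device
private PFN placed above RAM produces a struct page pointer past the end of
that array.
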
> New implementation
> ------------------
>
> This series changes device private memory so that it no longer requires
> the allocation of physical address space, avoiding these problems.
> Instead of using the physical address space, we introduce a "device
> private address space" and allocate from there.
>
> A consequence of placing the device private pages outside of the
> physical address space is that they no longer have a PFN. However, it is
> still necessary to be able to look up a corresponding device private
> page from a device private PTE entry, which means that we still require
> some way to index into this device private address space. Instead of a
> PFN, device private pages use an offset into this device private address
> space to look up device private struct pages.
>
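
Purely as an illustration of what such a lookup could look like (the helper
name and the backing xarray are hypothetical, not the interface this series
actually adds):

  /* Hypothetical: device private offset -> struct page, bypassing the memmap. */
  static DEFINE_XARRAY(device_private_pages);

  static struct page *device_private_offset_to_page(unsigned long offset)
  {
          /* The offset is not a PFN, so pfn_to_page() must not be used. */
          return xa_load(&device_private_pages, offset);
  }
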
> The problem that then needs to be addressed is how to avoid confusing
> these device private offsets with PFNs. It is the limited usage of the
> device private pages themselves which makes this possible. A device
> private page is only used for userspace mappings, so we do not need to
> be concerned with it being used within the mm more broadly. This means
> that the only way the core kernel looks up these pages is via the page
> table, where their PTE already indicates that they refer to a device
> private page via its swap type, e.g. SWP_DEVICE_WRITE. We can use this
> information to determine whether the PTE contains a PFN which should be
> looked up in the page map, or a device private offset which should be
> looked up elsewhere.
>
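
A minimal sketch of how a page table walker tells the two apart, using the
existing swap entry helpers from include/linux/swapops.h (the
device_private_offset_to_page() call refers to the hypothetical helper
sketched above):

  /* pte is a non-present entry read under the page table lock. */
  swp_entry_t entry = pte_to_swp_entry(pte);

  if (is_device_private_entry(entry)) {
          /*
           * SWP_DEVICE_READ/SWP_DEVICE_WRITE: the entry's offset field
           * can carry a device private offset instead of a PFN.
           */
          page = device_private_offset_to_page(swp_offset(entry));
  } else if (is_migration_entry(entry)) {
          /* Other pfn swap entries still index into the memmap. */
          page = pfn_swap_entry_to_page(entry);
  }
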
> This applies when we are creating PTE entries for device private pages -
> because they have their own type they already must be handled
> separately, so it is a small step to convert them to a device private
> PFN now too.
>
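
For context, the helpers that create these entries in current kernels
(include/linux/swapops.h) already take a plain number rather than a struct
page, which is what makes this a small step. A hedged sketch of the creation
side, where today "offset" is page_to_pfn(page) and under this series it
would be the device private offset:

  swp_entry_t entry;
  pte_t swp_pte;

  if (writable)
          entry = make_writable_device_private_entry(offset);
  else
          entry = make_readable_device_private_entry(offset);

  /* The swap type alone marks the resulting PTE as device private. */
  swp_pte = swp_entry_to_pte(entry);
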
> The first part of the series updates callers where device private
> offsets might now be encountered to track this extra state.
>
> The last patch contains the bulk of the work, where we change how we
> convert between device private pages and device private offsets and
> then use a new interface for allocating device private pages without
> the need for reserving physical address space.
>
> By removing the device private pages from the physical address space,
> this series also opens up the possibility of moving away from tracking
> device private memory using struct pages in the future. This is
> desirable, as on systems with large amounts of memory these device
> private struct pages use a significant amount of memory and take a
> significant amount of time to initialize.

Now device private pages are quite different from other pages, even in a
separate address space. IMHO, it may be better to make that as explicit
as possible. For example, would it be a good idea to put them in their
own zone, like ZONE_DEVICE_PRIVATE? It does not seem natural to put
pages from different address spaces into one zone, and a separate zone
may make them easier to distinguish from other pages.
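
To illustrate what "explicit" could look like: a sketch analogous to the
existing is_zone_device_page() check (ZONE_DEVICE_PRIVATE is hypothetical
and does not exist today):

  /* Hypothetical, mirroring the existing is_zone_device_page(). */
  static inline bool is_zone_device_private_page(const struct page *page)
  {
          return page_zonenum(page) == ZONE_DEVICE_PRIVATE;
  }
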
[snip]
---
Best Regards,
Huang, Ying