Message-ID: <dd62c21c-6ac5-4e30-b173-d1fa5dcf019f@linux.intel.com>
Date: Fri, 12 Dec 2025 10:45:24 +0800
From: Baolu Lu <baolu.lu@...ux.intel.com>
To: Mostafa Saleh <smostafa@...gle.com>, linux-mm@...ck.org,
iommu@...ts.linux.dev, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org
Cc: corbet@....net, joro@...tes.org, will@...nel.org, robin.murphy@....com,
akpm@...ux-foundation.org, vbabka@...e.cz, surenb@...gle.com,
mhocko@...e.com, jackmanb@...gle.com, hannes@...xchg.org, ziy@...dia.com,
david@...hat.com, lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com,
rppt@...nel.org, xiaqinxin@...wei.com, rdunlap@...radead.org
Subject: Re: [PATCH v4 3/4] iommu: debug-pagealloc: Track IOMMU pages
On 12/11/25 20:59, Mostafa Saleh wrote:
> Using the new calls, maintain an atomic refcount that tracks how many
> times a page is mapped in any of the IOMMUs.
>
> For unmap we need to use iova_to_phys() to get the physical address
> of the pages.
>
> We use the smallest supported page size as the per-domain tracking
> granularity. This is important as pages can be mapped and then unmapped
> with larger sizes (as in the map_sg() case).
>
> Signed-off-by: Mostafa Saleh <smostafa@...gle.com>
> ---
> drivers/iommu/iommu-debug-pagealloc.c | 91 +++++++++++++++++++++++++++
> 1 file changed, 91 insertions(+)
>
> diff --git a/drivers/iommu/iommu-debug-pagealloc.c b/drivers/iommu/iommu-debug-pagealloc.c
> index 1d343421da98..4639cf9518e6 100644
> --- a/drivers/iommu/iommu-debug-pagealloc.c
> +++ b/drivers/iommu/iommu-debug-pagealloc.c
> @@ -29,19 +29,110 @@ struct page_ext_operations page_iommu_debug_ops = {
> .need = need_iommu_debug,
> };
>
> +static struct page_ext *get_iommu_page_ext(phys_addr_t phys)
> +{
> + struct page *page = phys_to_page(phys);
> + struct page_ext *page_ext = page_ext_get(page);
> +
> + return page_ext;
> +}
> +
> +static struct iommu_debug_metadata *get_iommu_data(struct page_ext *page_ext)
> +{
> + return page_ext_data(page_ext, &page_iommu_debug_ops);
> +}
> +
> +static void iommu_debug_inc_page(phys_addr_t phys)
> +{
> + struct page_ext *page_ext = get_iommu_page_ext(phys);
> + struct iommu_debug_metadata *d = get_iommu_data(page_ext);
> +
> + WARN_ON(atomic_inc_return_relaxed(&d->ref) <= 0);
> + page_ext_put(page_ext);
> +}
> +
> +static void iommu_debug_dec_page(phys_addr_t phys)
> +{
> + struct page_ext *page_ext = get_iommu_page_ext(phys);
> + struct iommu_debug_metadata *d = get_iommu_data(page_ext);
> +
> + WARN_ON(atomic_dec_return_relaxed(&d->ref) < 0);
> + page_ext_put(page_ext);
> +}
> +
> +/*
> + * IOMMU page size doesn't have tomatch the CPU page size. So, we use
s/have tomatch/have to match/
> + * the smallest IOMMU page size to refcount the pages in the vmemmap.
> + * That is important as both map and unmap have to use the same page size
> + * to update the refcount, to avoid double counting the same page.
> + * And as we can't know in iommu_unmap() what page size was originally
> + * used for map, we just use the minimum supported one for both.
> + */
> +static size_t iommu_debug_page_size(struct iommu_domain *domain)
> +{
> + return 1UL << __ffs(domain->pgsize_bitmap);
> +}
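For anyone else following the series: as I read it, the map-side hook from
the earlier patch ends up driving these helpers roughly as in the sketch
below. This is only my illustration of the flow; iommu_debug_track_map()
is a hypothetical name, not a function from the actual patches.

static void iommu_debug_track_map(struct iommu_domain *domain,
				  phys_addr_t phys, size_t size)
{
	/* Refcount every minimum-sized IOMMU page in the range. */
	size_t pgsize = iommu_debug_page_size(domain);
	size_t off;

	for (off = 0; off < size; off += pgsize)
		iommu_debug_inc_page(phys + off);
}

Unmap would do the same walk with iommu_debug_dec_page(), after resolving
each IOVA back to a physical address with iova_to_phys(). Because both
paths step in iommu_debug_page_size() units, a page mapped with one page
size and unmapped with a different one is still counted consistently.
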
The changes look good to me,
Reviewed-by: Lu Baolu <baolu.lu@...ux.intel.com>