Message-ID: <20240523163630.24992c28@collabora.com>
Date: Thu, 23 May 2024 16:36:30 +0200
From: Boris Brezillon <boris.brezillon@...labora.com>
To: Robin Murphy <robin.murphy@....com>
Cc: Ashish Mhetre <amhetre@...dia.com>, will@...nel.org, joro@...tes.org,
linux-arm-kernel@...ts.infradead.org, Rob Clark <robdclark@...il.com>,
vdumpa@...dia.com, linux-tegra@...r.kernel.org, treding@...dia.com,
jonathanh@...dia.com, iommu@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] iommu: Optimize IOMMU UnMap
On Thu, 23 May 2024 14:41:12 +0100
Robin Murphy <robin.murphy@....com> wrote:
> On 23/05/2024 4:19 am, Ashish Mhetre wrote:
> > The current __arm_lpae_unmap() function calls dma_sync() on individual
> > PTEs after clearing them. By updating __arm_lpae_unmap() to call
> > dma_sync() once for all cleared PTEs, the overall unmap performance
> > can be improved by 25% for large buffer sizes.
> > Below is a detailed analysis of the average unmap latency (in us),
> > with and without this optimization, obtained by running
> > dma_map_benchmark for different buffer sizes.
> >
> > Size   Time W/O          Time With         % Improvement
> >        Optimization (us) Optimization (us)
> >
> > 4KB    3.0               3.1               -3.33
> > 1MB    250.3             187.9             24.93
>
> This seems highly suspect - the smallest possible block size is 2MB, so
> a 1MB unmap should not be affected by this path at all.
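For context, the block-size arithmetic behind that statement, assuming
the usual LPAE level shifts with a 4K granule (illustrative comment
only, not code from the patch):

	/*
	 * LPAE mapping sizes with a 4K granule:
	 *   level 3 entry: 1UL << 12        = 4K  (page)
	 *   level 2 entry: 1UL << (12 + 9)  = 2M  (block)
	 *   level 1 entry: 1UL << (12 + 18) = 1G  (block)
	 * The smallest block mapping is therefore 2M, so a 1M buffer can
	 * only ever be mapped with level-3 page entries.
	 */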
>
> > 2MB    493.7             368.7             25.32
> > 4MB    974.7             723.4             25.78
>
> I'm guessing this is on Tegra with the workaround to force everything to
> PAGE_SIZE? In the normal case a 2MB unmap should be nominally *faster*
> than 4KB, since it would also be a single PTE, but with one fewer level
> of table to walk to reach it. The 25% figure is rather misleading if
> it's only a mitigation of an existing erratum workaround, and the actual
> impact on the majority of non-broken systems is unmeasured.
>
> (As an aside, I think that workaround itself is a bit broken, since at
> least on Tegra234 with Cortex-A78, PAGE_SIZE could be 16KB which MMU-500
> doesn't support.)
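For illustration, MMU-500 implements only the 4K and 64K translation
granules, so a PAGE_SIZE-based restriction is only coherent when
PAGE_SIZE matches one of those. A minimal check might look like this
(sketch only; mmu500_page_size_ok() is a hypothetical helper, not
upstream code):

	/* Hypothetical sketch: the workaround is only safe when
	 * PAGE_SIZE matches a granule MMU-500 actually implements. */
	static bool mmu500_page_size_ok(void)
	{
		return PAGE_SIZE == SZ_4K || PAGE_SIZE == SZ_64K;
	}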
>
> > Signed-off-by: Ashish Mhetre <amhetre@...dia.com>
> > ---
> > drivers/iommu/io-pgtable-arm.c | 34 +++++++++++++++++++++++++---------
> > 1 file changed, 25 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
> > index 3d23b924cec1..94094b711cba 100644
> > --- a/drivers/iommu/io-pgtable-arm.c
> > +++ b/drivers/iommu/io-pgtable-arm.c
> > @@ -256,13 +256,15 @@ static void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries,
> > sizeof(*ptep) * num_entries, DMA_TO_DEVICE);
> > }
> >
> > -static void __arm_lpae_clear_pte(arm_lpae_iopte *ptep, struct io_pgtable_cfg *cfg)
> > +static void __arm_lpae_clear_pte(arm_lpae_iopte *ptep, struct io_pgtable_cfg *cfg, int num_entries)
> > {
> > + int i;
> >
> > - *ptep = 0;
> > + for (i = 0; i < num_entries; i++)
> > + ptep[i] = 0;
> >
> > if (!cfg->coherent_walk)
> > - __arm_lpae_sync_pte(ptep, 1, cfg);
> > + __arm_lpae_sync_pte(ptep, num_entries, cfg);
> > }
> >
> > static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
> > @@ -633,13 +635,25 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
> > if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
> > max_entries = ARM_LPAE_PTES_PER_TABLE(data) - unmap_idx_start;
> > num_entries = min_t(int, pgcount, max_entries);
> > -
> > - while (i < num_entries) {
> > - pte = READ_ONCE(*ptep);
> > + arm_lpae_iopte *pte_flush;
> > + int j = 0;
> > +
> > + pte_flush = kvcalloc(num_entries, sizeof(*pte_flush), GFP_ATOMIC);
>
> kvmalloc() with GFP_ATOMIC isn't valid. However, I'm not sure if there
> isn't a more fundamental problem here - Rob, Boris: was it just the map
> path, or would any allocation on unmap risk the GPU reclaim deadlock as
> well?
Unmap as well, because of the 'split huge page into small pages' logic
that kicks in when the unmapped region is not aligned on a 2MB boundary.
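
For reference, that split logic lives in arm_lpae_split_blk_unmap(),
which has to allocate the replacement table, and does so atomically.
Roughly (a simplified sketch of the upstream code, from memory):

	/* Child table to remap the remainder of the block - an
	 * allocation in the middle of the unmap path. */
	tablep = __arm_lpae_alloc_pages(ARM_LPAE_GRANULE(data), GFP_ATOMIC,
					cfg, data->iop.cookie);
	if (!tablep)
		return 0; /* Bytes unmapped */

So any unmap that isn't block-aligned can end up allocating, and if
that atomic allocation fails the unmap quietly stops short.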