Message-ID: <d0c4a6b8cb669d9321173c4d4ce0062b6f7698d5.camel@gmail.com>
Date: Thu, 22 Apr 2021 03:50:44 -0300
From: Leonardo Bras <leobras.c@...il.com>
To: Michael Ellerman <mpe@...erman.id.au>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Alexey Kardashevskiy <aik@...abs.ru>,
Niklas Schnelle <schnelle@...ux.ibm.com>,
Nicolin Chen <nicoleotsuka@...il.com>
Cc: linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] powerpc/kernel/iommu: Use largepool as a last
resort when !largealloc

Hello,

FYI: This patch was reviewed when it was part of another patchset:
http://patchwork.ozlabs.org/project/linuxppc-dev/patch/20200817234033.442511-4-leobras.c@gmail.com/

On Thu, 2021-03-18 at 14:44 -0300, Leonardo Bras wrote:
> As of today, iommu_range_alloc() for !largealloc (npages <= 15) can only
> use 3/4 of the available pages, since pages in the largepool (the top 1/4
> of the table) are not available for !largealloc allocations.
>
> This means some drivers may be unable to use all of the available pages
> in the DMA window.
>
> Add the pages in the largepool as a last resort for !largealloc, making
> all pages of the DMA window available.
>
> Signed-off-by: Leonardo Bras <leobras.c@...il.com>
> Reviewed-by: Alexey Kardashevskiy <aik@...abs.ru>
> ---
> arch/powerpc/kernel/iommu.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
> index 3329ef045805..ae6ad8dca605 100644
> --- a/arch/powerpc/kernel/iommu.c
> +++ b/arch/powerpc/kernel/iommu.c
> @@ -255,6 +255,15 @@ static unsigned long iommu_range_alloc(struct device *dev,
>  			pass++;
>  			goto again;
>  
> +		} else if (pass == tbl->nr_pools + 1) {
> +			/* Last resort: try largepool */
> +			spin_unlock(&pool->lock);
> +			pool = &tbl->large_pool;
> +			spin_lock(&pool->lock);
> +			pool->hint = pool->start;
> +			pass++;
> +			goto again;
> +
>  		} else {
>  			/* Give up */
>  			spin_unlock_irqrestore(&(pool->lock), flags);
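
For anyone wondering where the 3/4 figure comes from: iommu_init_table()
reserves the top quarter of the table for the large pool and splits the
remaining entries evenly among the small pools. Below is a standalone
userspace sketch of that carving, based on my reading of
arch/powerpc/kernel/iommu.c (the IOMMU_NR_POOLS value and the exact sizing
formula here are assumptions, so treat the numbers as illustrative):

#include <stdio.h>

#define IOMMU_NR_POOLS 4	/* assumed value, see asm/iommu.h */

struct pool_range {
	unsigned long start, end, hint;
};

int main(void)
{
	unsigned long it_size = 1UL << 16;	/* example window: 65536 TCEs */
	unsigned long nr_pools = IOMMU_NR_POOLS;
	struct pool_range pools[IOMMU_NR_POOLS], large_pool;
	/* Bottom 3/4 of the window is shared by the small pools */
	unsigned long poolsize = (it_size * 3 / 4) / nr_pools;
	unsigned long i;

	for (i = 0; i < nr_pools; i++) {
		pools[i].start = poolsize * i;
		pools[i].end   = pools[i].start + poolsize;
		pools[i].hint  = pools[i].start;	/* next-fit hint, as in the kernel */
	}

	/* Top quarter is reserved for large (npages > 15) allocations */
	large_pool.start = poolsize * nr_pools;
	large_pool.end   = it_size;
	large_pool.hint  = large_pool.start;

	for (i = 0; i < nr_pools; i++)
		printf("small pool %lu: [%5lu, %5lu)\n",
		       i, pools[i].start, pools[i].end);
	printf("large pool:    [%5lu, %5lu) -> %lu%% of the window\n",
	       large_pool.start, large_pool.end,
	       100 * (large_pool.end - large_pool.start) / it_size);
	return 0;
}

With the hunk above, the pass counter now walks the whole window: pass 0
retries the starting pool from its beginning, passes 1..nr_pools scan the
other small pools, and pass nr_pools + 1 takes one last shot at the large
pool before the allocator gives up.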