Message-ID: <CAC_iWjLXL_FbCsCvuhqhz_i8j_yOgAs3gn2DAVktV0aZqT3QYg@mail.gmail.com>
Date: Tue, 18 Apr 2023 12:32:47 +0300
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com,
pabeni@...hat.com, michael.chan@...adcom.com, hawk@...nel.org
Subject: Re: [PATCH net-next] page_pool: add DMA_ATTR_WEAK_ORDERING on all mappings
This seems sane to me, especially since we use page pool on the Rx
path for normal drivers. If anyone has a different opinion please
shout.
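
For anyone following along, drivers that already let the pool handle
the mapping (PP_FLAG_DMA_MAP) pick this up without any changes on
their side. A minimal sketch of such a setup (the device pointer and
sizes below are illustrative, not taken from any particular driver):

    struct page_pool_params pp_params = {
        .flags     = PP_FLAG_DMA_MAP,    /* let the pool map pages */
        .order     = 0,
        .pool_size = 1024,
        .nid       = NUMA_NO_NODE,
        .dev       = &pdev->dev,         /* illustrative device */
        .dma_dir   = DMA_FROM_DEVICE,    /* typical Rx direction */
    };
    struct page_pool *pool = page_pool_create(&pp_params);

    /* With this patch, page_pool_dma_map() passes
     * DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING for every page
     * the pool maps, so the driver never touches the attrs itself.
     */
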
On Mon, 17 Apr 2023 at 18:28, Jakub Kicinski <kuba@...nel.org> wrote:
>
> Commit c519fe9a4f0d ("bnxt: add dma mapping attributes") added
> DMA_ATTR_WEAK_ORDERING to DMA attrs on bnxt. It has since spread
> to a few more drivers (possibly as a copy'n'paste).
>
> DMA_ATTR_WEAK_ORDERING only seems to matter on Sparc and PowerPC/cell,
> the rarity of these platforms is likely why we never bothered adding
> the attribute in the page pool, even though it should be safe to add.
>
> To make the page pool migration in drivers which set this flag less
> of a risk (of regressing the precious sparc database workloads or
> whatever needed this) let's add DMA_ATTR_WEAK_ORDERING on all
> page pool DMA mappings.
>
> We could make this a driver opt-in but frankly I don't think it's
> worth complicating the API. I can't think of a reason why device
> accesses to packet memory would have to be ordered.
>
> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> ---
> CC: hawk@...nel.org
> CC: ilias.apalodimas@...aro.org
> ---
> net/core/page_pool.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 2f6bf422ed30..97f20f7ff4fc 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -316,7 +316,8 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
> */
> dma = dma_map_page_attrs(pool->p.dev, page, 0,
> (PAGE_SIZE << pool->p.order),
> - pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> + pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC |
> + DMA_ATTR_WEAK_ORDERING);
> if (dma_mapping_error(pool->p.dev, dma))
> return false;
>
> @@ -484,7 +485,7 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
> /* When page is unmapped, it cannot be returned to our pool */
> dma_unmap_page_attrs(pool->p.dev, dma,
> PAGE_SIZE << pool->p.order, pool->p.dma_dir,
> - DMA_ATTR_SKIP_CPU_SYNC);
> + DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
> page_pool_set_dma_addr(page, 0);
> skip_dma_unmap:
> page_pool_clear_pp_info(page);
> --
> 2.39.2
>
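For context, the driver-side mapping this makes redundant is roughly
the bnxt-style call below (an illustrative sketch, not copied from any
driver); converting such drivers to page_pool now keeps the
weak-ordering attribute on the platforms where it matters:

    dma_addr_t mapping;

    mapping = dma_map_page_attrs(&pdev->dev, page, 0, PAGE_SIZE,
                                 DMA_FROM_DEVICE,
                                 DMA_ATTR_WEAK_ORDERING);
    if (dma_mapping_error(&pdev->dev, mapping))
        return -ENOMEM;    /* mapping failed, drop the page */
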
Acked-by: Ilias Apalodimas <ilias.apalodimas@...aro.org>