Date:   Wed, 19 Apr 2023 15:49:17 +0200
From:   Alexander Lobakin <aleksander.lobakin@...el.com>
To:     Jakub Kicinski <kuba@...nel.org>
CC:     <davem@...emloft.net>, <netdev@...r.kernel.org>,
        <edumazet@...gle.com>, <pabeni@...hat.com>,
        <michael.chan@...adcom.com>, <hawk@...nel.org>,
        <ilias.apalodimas@...aro.org>
Subject: Re: [PATCH net-next] page_pool: add DMA_ATTR_WEAK_ORDERING on all
 mappings

From: Jakub Kicinski <kuba@...nel.org>
Date: Mon, 17 Apr 2023 08:28:05 -0700

> Commit c519fe9a4f0d ("bnxt: add dma mapping attributes") added
> DMA_ATTR_WEAK_ORDERING to DMA attrs on bnxt. It has since spread
> to a few more drivers (possibly as a copy'n'paste).
> 
> DMA_ATTR_WEAK_ORDERING only seems to matter on Sparc and PowerPC/cell;
> the rarity of these platforms is likely why we never bothered adding
> the attribute in the page pool, even though it should be safe to add.
> 
> To make the page pool migration in drivers which set this flag less
> of a risk (of regressing the precious sparc database workloads or
> whatever needed this), let's add DMA_ATTR_WEAK_ORDERING on all
> page pool DMA mappings.
> 
> We could make this a driver opt-in but frankly I don't think it's
> worth complicating the API. I can't think of a reason why device
> accesses to packet memory would have to be ordered.
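
(For context, the per-driver pattern being subsumed looks roughly like
the sketch below. It's only an illustration, not bnxt's actual code;
"dev", "page" and "mapping" are placeholder names.)

	dma_addr_t mapping;

	/* What a driver mapping its RX pages by hand has to do today to
	 * get weak ordering; with this patch, page_pool does it for you.
	 */
	mapping = dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
				     DMA_FROM_DEVICE,
				     DMA_ATTR_WEAK_ORDERING);
	if (dma_mapping_error(dev, mapping))
		return -ENOMEM;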

And you do that 2 days before I send this: [0] :D
Just jokin' :p

Unconditional weak ordering seems reasonable to me as well. BTW, I've
already seen one driver converted to PP which still manages DMA
mappings on its own *solely* because it needs to map its buffers with
weak ordering =\
Thanks to this change, it could be switched to PP's built-in mapping
handling (-100 LoC).
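
Roughly what I mean by that conversion, just as a sketch: the driver
lets the pool own the mappings instead of open-coding
dma_map_page_attrs(). "my_rxq", "my_rx_desc" and their fields are
made-up names here, not that driver's actual code.

#include <net/page_pool.h>

static int my_rxq_create_pool(struct my_rxq *rxq, struct device *dev)
{
	struct page_pool_params pp = {
		/* PP_FLAG_DMA_MAP: the pool maps pages itself; with this
		 * patch the mapping also carries DMA_ATTR_WEAK_ORDERING.
		 */
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,
		.pool_size	= rxq->ring_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE,
		.offset		= 0,
	};

	rxq->page_pool = page_pool_create(&pp);
	return PTR_ERR_OR_ZERO(rxq->page_pool);
}

static int my_rxq_refill_one(struct my_rxq *rxq, struct my_rx_desc *desc)
{
	struct page *page = page_pool_dev_alloc_pages(rxq->page_pool);

	if (!page)
		return -ENOMEM;

	/* The DMA address comes from the pool, no per-buffer mapping */
	desc->addr = cpu_to_le64(page_pool_get_dma_addr(page));
	return 0;
}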

Reviewed-by: Alexander Lobakin <aleksander.lobakin@...el.com>

> 
> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> ---
> CC: hawk@...nel.org
> CC: ilias.apalodimas@...aro.org
> ---
>  net/core/page_pool.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 2f6bf422ed30..97f20f7ff4fc 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -316,7 +316,8 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
>  	 */
>  	dma = dma_map_page_attrs(pool->p.dev, page, 0,
>  				 (PAGE_SIZE << pool->p.order),
> -				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> +				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC |
> +						  DMA_ATTR_WEAK_ORDERING);
>  	if (dma_mapping_error(pool->p.dev, dma))
>  		return false;
>  
> @@ -484,7 +485,7 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
>  	/* When page is unmapped, it cannot be returned to our pool */
>  	dma_unmap_page_attrs(pool->p.dev, dma,
>  			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
> -			     DMA_ATTR_SKIP_CPU_SYNC);
> +			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
>  	page_pool_set_dma_addr(page, 0);
>  skip_dma_unmap:
>  	page_pool_clear_pp_info(page);

[0]
https://github.com/alobakin/linux/commit/9d3d9ab4e7765d5a4c466c65ad6cf204a3ad1a71

Thanks,
Olek
