Message-ID: <d123d288-4215-4a8c-9689-bbfe24c24b08@redhat.com>
Date: Tue, 1 Oct 2024 13:30:19 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Yunsheng Lin <linyunsheng@...wei.com>, davem@...emloft.net,
 kuba@...nel.org
Cc: liuyonglong@...wei.com, fanghaiqing@...wei.com, zhangkun09@...wei.com,
 Alexander Lobakin <aleksander.lobakin@...el.com>,
 Jesper Dangaard Brouer <hawk@...nel.org>,
 Ilias Apalodimas <ilias.apalodimas@...aro.org>,
 Eric Dumazet <edumazet@...gle.com>, netdev@...r.kernel.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH net v2 1/2] page_pool: fix timing for checking and
 disabling napi_local

On 9/25/24 09:57, Yunsheng Lin wrote:
> A page_pool page may be freed from skb_defer_free_flush() in
> softirq context, which can cause concurrent access to the
> pool->alloc cache due to the time window shown below: CPU0 and
> CPU1 may both access the pool->alloc cache, in
> page_pool_empty_alloc_cache_once() and
> page_pool_recycle_in_cache() respectively:
> 
>            CPU 0                          CPU 1
>      page_pool_destroy()          skb_defer_free_flush()
>             .                               .
>             .                   page_pool_put_unrefed_page()
>             .                               .
>             .               allow_direct = page_pool_napi_local()
>             .                               .
> page_pool_disable_direct_recycling()       .
>             .                               .
> page_pool_empty_alloc_cache_once() page_pool_recycle_in_cache()
> 
> Use the RCU mechanism to avoid the above concurrent access problem.
> 
> Note: the above was found during code review of how to fix the
> problem in [1].
> 
> 1. https://lore.kernel.org/lkml/8067f204-1380-4d37-8ffd-007fc6f26738@kernel.org/T/
> 
> Fixes: dd64b232deb8 ("page_pool: unlink from napi during destroy")
> Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
> CC: Alexander Lobakin <aleksander.lobakin@...el.com>
> ---
>   net/core/page_pool.c | 31 ++++++++++++++++++++++++++++---
>   1 file changed, 28 insertions(+), 3 deletions(-)
> 
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index a813d30d2135..bec6e717cd22 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -818,8 +818,17 @@ static bool page_pool_napi_local(const struct page_pool *pool)
>   void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
>   				  unsigned int dma_sync_size, bool allow_direct)
>   {
> -	if (!allow_direct)
> +	bool allow_direct_orig = allow_direct;
> +
> +	/* page_pool_put_unrefed_netmem() is not supposed to be called with
> +	 * allow_direct set to true after page_pool_destroy() has run, so
> +	 * that case does not need synchronization.
> +	 */
> +	DEBUG_NET_WARN_ON_ONCE(allow_direct && pool->destroy_cnt);
> +	if (!allow_direct_orig) {
> +		rcu_read_lock();
>   		allow_direct = page_pool_napi_local(pool);
> +	}
>   
>   	netmem =
>   		__page_pool_put_page(pool, netmem, dma_sync_size, allow_direct);
> @@ -828,6 +837,9 @@ void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
>   		recycle_stat_inc(pool, ring_full);
>   		page_pool_return_page(pool, netmem);
>   	}
> +
> +	if (!allow_direct_orig)
> +		rcu_read_unlock();

What about always acquiring the RCU lock? Would that impact performance
negatively?

If not, I think it's preferable, as it would make static checkers happy.
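
I mean something like the sketch below (untested; the body is
reconstructed from your patch plus the surrounding page_pool code, so
take it only as an illustration of the unconditional locking):

	void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
					  unsigned int dma_sync_size, bool allow_direct)
	{
		DEBUG_NET_WARN_ON_ONCE(allow_direct && pool->destroy_cnt);

		/* Always enter the RCU read-side section, regardless of
		 * allow_direct, so every reader path pairs with the
		 * synchronize_rcu() in page_pool_destroy() and static
		 * checkers see balanced lock/unlock on all branches.
		 */
		rcu_read_lock();
		if (!allow_direct)
			allow_direct = page_pool_napi_local(pool);

		netmem = __page_pool_put_page(pool, netmem, dma_sync_size,
					      allow_direct);
		if (netmem && !page_pool_recycle_in_ring(pool, netmem)) {
			/* ring full, fall back to freeing the page */
			recycle_stat_inc(pool, ring_full);
			page_pool_return_page(pool, netmem);
		}
		rcu_read_unlock();
	}

rcu_read_lock()/unlock() in the fast path should be basically free on
most configs, but it would be nice to double-check with a recycling
benchmark.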

>   }
>   EXPORT_SYMBOL(page_pool_put_unrefed_netmem);
>   

[...]

> @@ -1121,6 +1140,12 @@ void page_pool_destroy(struct page_pool *pool)
>   		return;
>   
>   	page_pool_disable_direct_recycling(pool);
> +
> +	/* Wait for the freeing side to see that direct recycling is
> +	 * disabled, to avoid concurrent access to the pool->alloc cache.
> +	 */
> +	synchronize_rcu();

When turning on/off a device with a lot of queues, the above could
introduce a lot of long waits under the RTNL lock, right?

What about moving the tail of this function into a separate helper and
using call_rcu() instead?
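
Roughly like this (completely untested sketch: it assumes adding a
struct rcu_head to struct page_pool, and page_pool_destroy_deferred is
a made-up name):

	/* Tail of page_pool_destroy(), run after a grace period */
	static void page_pool_destroy_deferred(struct rcu_head *rcu)
	{
		struct page_pool *pool = container_of(rcu, struct page_pool, rcu);

		page_pool_free_frag(pool);

		if (!page_pool_release(pool))
			return;

		/* ... the rest of the current page_pool_destroy() tail ... */
	}

	void page_pool_destroy(struct page_pool *pool)
	{
		/* ... existing refcount/users checks ... */

		page_pool_disable_direct_recycling(pool);

		/* No synchronous grace-period wait here: the callback runs
		 * only after all concurrent page_pool_napi_local() callers
		 * have left their RCU read-side critical sections.
		 */
		call_rcu(&pool->rcu, page_pool_destroy_deferred);
	}

That way page_pool_destroy() returns immediately and the RTNL holder
does not block on a grace period per pool.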

Thanks!

Paolo


> +
>   	page_pool_free_frag(pool);
>   
>   	if (!page_pool_release(pool))

