Open Source and information security mailing list archives
 
Message-ID: <ec42f238-8fc7-2ea4-c1a7-e4c3c4b8f512@redhat.com>
Date:   Fri, 3 Feb 2023 12:15:38 +0100
From:   Jesper Dangaard Brouer <jbrouer@...hat.com>
To:     Qingfang DENG <dqfext@...il.com>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        Ilias Apalodimas <ilias.apalodimas@...aro.org>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Lorenzo Bianconi <lorenzo@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        John Fastabend <john.fastabend@...il.com>,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Cc:     brouer@...hat.com
Subject: Re: [PATCH net] net: page_pool: use in_softirq() instead


On 02/02/2023 03.44, Qingfang DENG wrote:
> From: Qingfang DENG <qingfang.deng@...lower.com.cn>
> 
> We use BH context only for synchronization, so we don't care if it's
> actually serving softirq or not.
> 

Are you sure this is safe?
(also see my inline notes below)

> As a side note, in case of threaded NAPI, in_serving_softirq() will
> return false because it's in process context with BH off, making
> page_pool_recycle_in_cache() unreachable.

How can I enable threaded NAPI on my system?
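
(For reference: on kernels that support it, threaded NAPI is toggled per netdev via sysfs; "eth0" below is a placeholder for the actual interface.)

```shell
# Switch the device's NAPI instances to threaded mode (kernel 5.12+):
echo 1 > /sys/class/net/eth0/threaded

# The per-queue pollers should then appear as kernel threads
# (names typically look like "napi/<dev>-<id>"):
ps -e | grep napi
```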

> Signed-off-by: Qingfang DENG <qingfang.deng@...lower.com.cn>
> Fixes: 7886244736a4 ("net: page_pool: Add bulk support for ptr_ring")
> Fixes: ff7d6b27f894 ("page_pool: refurbish version of page_pool code")
> ---
>   include/net/page_pool.h | 4 ++--
>   net/core/page_pool.c    | 6 +++---
>   2 files changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 813c93499f20..34bf531ffc8d 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -386,7 +386,7 @@ static inline void page_pool_nid_changed(struct page_pool *pool, int new_nid)
>   static inline void page_pool_ring_lock(struct page_pool *pool)
>   	__acquires(&pool->ring.producer_lock)
>   {
> -	if (in_serving_softirq())
> +	if (in_softirq())
>   		spin_lock(&pool->ring.producer_lock);
>   	else
>   		spin_lock_bh(&pool->ring.producer_lock);
> @@ -395,7 +395,7 @@ static inline void page_pool_ring_lock(struct page_pool *pool)
>   static inline void page_pool_ring_unlock(struct page_pool *pool)
>   	__releases(&pool->ring.producer_lock)
>   {
> -	if (in_serving_softirq())
> +	if (in_softirq())
>   		spin_unlock(&pool->ring.producer_lock);
>   	else
>   		spin_unlock_bh(&pool->ring.producer_lock);
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 9b203d8660e4..193c18799865 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -511,8 +511,8 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page)
>   static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
>   {
>   	int ret;
> -	/* BH protection not needed if current is serving softirq */
> -	if (in_serving_softirq())
> +	/* BH protection not needed if current is softirq */
> +	if (in_softirq())
>   		ret = ptr_ring_produce(&pool->ring, page);
>   	else
>   		ret = ptr_ring_produce_bh(&pool->ring, page);
> @@ -570,7 +570,7 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
>   			page_pool_dma_sync_for_device(pool, page,
>   						      dma_sync_size);
>   
> -		if (allow_direct && in_serving_softirq() &&
> +		if (allow_direct && in_softirq() &&
>   		    page_pool_recycle_in_cache(page, pool))

I think the other cases (above) are likely safe, but I worry a little
about this case, as page_pool_recycle_in_cache() relies on RX-NAPI
protection: only the CPU that handles RX-NAPI for this RX-queue is
allowed to access this lockless array.

We do have the 'allow_direct' boolean, and if every driver/user uses
it correctly, then this should be safe.  But this change makes it
possible for drivers to use the page_pool API incorrectly, which leads
to hard-to-debug errors.

>   			return NULL;
>   

--Jesper
