Message-ID: <20220406231512.GB96269@fastly.com>
Date:   Wed, 6 Apr 2022 16:15:13 -0700
From:   Joe Damato <jdamato@...tly.com>
To:     Lorenzo Bianconi <lorenzo@...nel.org>
Cc:     netdev@...r.kernel.org, lorenzo.bianconi@...hat.com,
        davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com,
        jbrouer@...hat.com, ilias.apalodimas@...aro.org
Subject: Re: [PATCH net-next] page_pool: Add recycle stats to
 page_pool_put_page_bulk

On Tue, Apr 05, 2022 at 10:52:55PM +0200, Lorenzo Bianconi wrote:
> Add missing recycle stats to page_pool_put_page_bulk routine.

Thanks for proposing this change. I did miss this path when adding
stats.

I'm sort of torn on this. It almost seems that we might want to track
bulking events separately as their own stat.
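
For the sake of argument, a separate bulking counter might look roughly like
the sketch below. The existing fields are the ones the stats series already
defines (if I remember the names right); the 'ring_bulk' field and where it
gets bumped are purely hypothetical, not something in the tree:

	/* hypothetical sketch only -- 'ring_bulk' is not an existing field */
	struct page_pool_recycle_stats {
		u64 cached;		/* recycled into the per-cpu alloc cache */
		u64 cache_full;		/* cache was full, fell back to the ring */
		u64 ring;		/* recycled into the ptr_ring */
		u64 ring_full;		/* ring was full, page was released */
		u64 released_refcnt;	/* released due to elevated refcount */
		u64 ring_bulk;		/* hypothetical: pages recycled via
					 * page_pool_put_page_bulk()
					 */
	};

	/* then, in page_pool_put_page_bulk(), after the producer loop: */
	recycle_stat_add(pool, ring_bulk, i);

That would keep the bulk path visible on its own instead of folding it into
the existing 'ring' counter.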

Maybe Ilias has an opinion on this; I did implement the stats, but I'm not
a maintainer of the page_pool so I'm not sure what I think matters all
that much ;) 

> Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> ---
>  net/core/page_pool.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 1943c0f0307d..4af55d28ffa3 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -36,6 +36,12 @@
>  		this_cpu_inc(s->__stat);						\
>  	} while (0)
>  
> +#define recycle_stat_add(pool, __stat, val)						\
> +	do {										\
> +		struct page_pool_recycle_stats __percpu *s = pool->recycle_stats;	\
> +		this_cpu_add(s->__stat, val);						\
> +	} while (0)
> +
>  bool page_pool_get_stats(struct page_pool *pool,
>  			 struct page_pool_stats *stats)
>  {
> @@ -63,6 +69,7 @@ EXPORT_SYMBOL(page_pool_get_stats);
>  #else
>  #define alloc_stat_inc(pool, __stat)
>  #define recycle_stat_inc(pool, __stat)
> +#define recycle_stat_add(pool, __stat, val)
>  #endif
>  
>  static int page_pool_init(struct page_pool *pool,
> @@ -566,9 +573,13 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
>  	/* Bulk producer into ptr_ring page_pool cache */
>  	page_pool_ring_lock(pool);
>  	for (i = 0; i < bulk_len; i++) {
> -		if (__ptr_ring_produce(&pool->ring, data[i]))
> -			break; /* ring full */
> +		if (__ptr_ring_produce(&pool->ring, data[i])) {
> +			/* ring full */
> +			recycle_stat_inc(pool, ring_full);
> +			break;
> +		}
>  	}
> +	recycle_stat_add(pool, ring, i);

If we do go with this approach (instead of adding bulking-specific stats),
we might want to replicate this change in __page_pool_alloc_pages_slow; we
currently only count the single allocation returned by the slow path, but
the rest of the pages which refilled the cache are not counted.
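
Something like the following, perhaps; this is only a rough sketch, an
alloc_stat_add() helper doesn't exist today, and whether those leftover pages
belong in the existing 'slow' counter or in a counter of their own is exactly
the open question:

	/* hypothetical helper, mirroring alloc_stat_inc() */
	#define alloc_stat_add(pool, __stat, val) \
		(pool->alloc_stats.__stat += (val))

	/* at the tail of __page_pool_alloc_pages_slow(), roughly: */
	if (likely(pool->alloc.count > 0)) {
		page = pool->alloc.cache[--pool->alloc.count];
		alloc_stat_inc(pool, slow);
		/* also account for the pages left behind in alloc.cache */
		alloc_stat_add(pool, slow, pool->alloc.count);
	} else {
		page = NULL;
	}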

>  	page_pool_ring_unlock(pool);
>  
>  	/* Hopefully all pages was return into ptr_ring */
> -- 
> 2.35.1
> 
