Date:   Wed, 23 Feb 2022 17:05:51 +0100
From:   Jesper Dangaard Brouer <jbrouer@...hat.com>
To:     Joe Damato <jdamato@...tly.com>, netdev@...r.kernel.org,
        kuba@...nel.org, ilias.apalodimas@...aro.org, davem@...emloft.net,
        hawk@...nel.org, saeed@...nel.org, ttoukan.linux@...il.com
Cc:     brouer@...hat.com
Subject: Re: [net-next v6 1/2] page_pool: Add page_pool stats



On 23/02/2022 01.00, Joe Damato wrote:
> Add per-cpu per-pool statistics counters for the allocation path of a page
> pool.
> 
> This code is disabled by default and a kernel config option is provided for
> users who wish to enable them.
> 
> The statistics added are:
> 	- fast: successful fast path allocations
> 	- slow: slow path order-0 allocations
> 	- slow_high_order: slow path high order allocations
> 	- empty: ptr ring is empty, so a slow path allocation was forced.
> 	- refill: an allocation which triggered a refill of the cache
> 	- waive: pages obtained from the ptr ring that cannot be added to
> 	  the cache due to a NUMA mismatch.
> 
> Signed-off-by: Joe Damato <jdamato@...tly.com>
> ---
>   include/net/page_pool.h | 18 ++++++++++++++++++
>   net/Kconfig             | 13 +++++++++++++
>   net/core/page_pool.c    | 37 +++++++++++++++++++++++++++++++++----
>   3 files changed, 64 insertions(+), 4 deletions(-)
> 
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 97c3c19..bedc82f 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -135,7 +135,25 @@ struct page_pool {
>   	refcount_t user_cnt;
>   
>   	u64 destroy_cnt;
> +#ifdef CONFIG_PAGE_POOL_STATS
> +	struct page_pool_stats __percpu *stats ____cacheline_aligned_in_smp;
> +#endif
> +};

Adding this to the end of the struct and using the attribute 
____cacheline_aligned_in_smp causes the structure to have a lot of wasted 
padding at the end.

I recommend using the tool pahole to see the struct layout.
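For example (assuming a build with debug info; the placeholders below are 
illustrative, the real numbers will of course differ):

   $ pahole -C page_pool vmlinux
   struct page_pool {
           ...
           /* size: ..., cachelines: ..., members: ... */
           /* sum members: ..., holes: ..., sum holes: ... */
           /* padding: ... */
           /* last cacheline: ... bytes */
   };

The "/* padding: ... */" summary line at the end is where the 
____cacheline_aligned_in_smp placement shows up.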


> +
> +#ifdef CONFIG_PAGE_POOL_STATS
> +struct page_pool_stats {
> +	struct {
> +		u64 fast; /* fast path allocations */
> +		u64 slow; /* slow-path order 0 allocations */
> +		u64 slow_high_order; /* slow-path high order allocations */
> +		u64 empty; /* failed refills due to empty ptr ring, forcing
> +			    * slow path allocation
> +			    */
> +		u64 refill; /* allocations via successful refill */
> +		u64 waive;  /* failed refills due to numa zone mismatch */
> +	} alloc;
>   };
> +#endif

All of these stats are for the page_pool allocation ("RX") side, which is 
protected by softirq/NAPI.
Thus, I find it unnecessary to use __percpu stats.
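
Just to illustrate what I mean -- a rough sketch, not a concrete 
proposal, and the names are made up:

   /* The allocation path runs under the same softirq/NAPI protection as
    * the pool's alloc cache, so plain counters are enough; there is no
    * concurrent writer to guard against.
    */
   struct page_pool_alloc_stats {
           u64 fast;
           u64 slow;
           u64 slow_high_order;
           u64 empty;
           u64 refill;
           u64 waive;
   };

   struct page_pool {
           ...
           /* embedded directly: no __percpu allocation and no extra
            * cacheline alignment needed */
           struct page_pool_alloc_stats alloc_stats;
   };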


As Ilias has pointed out before, __percpu stats (first) become relevant 
once we want stats for the free/"return" path ... which is not part of 
this patchset.
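
For the return path -- which can run from any CPU/context -- a __percpu 
counter bumped with this_cpu_inc() would be the natural fit, something 
along these lines (again just a sketch, names made up):

   struct page_pool_recycle_stats {
           u64 cached;
           u64 ring;
   };

   /* in struct page_pool */
   struct page_pool_recycle_stats __percpu *recycle_stats;

   /* in the return/recycle path */
   this_cpu_inc(pool->recycle_stats->ring);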

--Jesper
