Message-ID: <Yk4j4YCVuOLK/1uE@lunn.ch>
Date:   Thu, 7 Apr 2022 01:36:01 +0200
From:   Andrew Lunn <andrew@...n.ch>
To:     Joe Damato <jdamato@...tly.com>
Cc:     Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org,
        lorenzo.bianconi@...hat.com, davem@...emloft.net, kuba@...nel.org,
        pabeni@...hat.com, thomas.petazzoni@...tlin.com,
        linux@...linux.org.uk, jbrouer@...hat.com,
        ilias.apalodimas@...aro.org
Subject: Re: [PATCH net-next] net: mvneta: add support for page_pool_get_stats

On Wed, Apr 06, 2022 at 04:01:37PM -0700, Joe Damato wrote:
> On Wed, Apr 06, 2022 at 04:02:44PM +0200, Lorenzo Bianconi wrote:
> > > > +static void mvneta_ethtool_update_pp_stats(struct mvneta_port *pp,
> > > > +					   struct page_pool_stats *stats)
> > > > +{
> > > > +	int i;
> > > > +
> > > > +	memset(stats, 0, sizeof(*stats));
> > > > +	for (i = 0; i < rxq_number; i++) {
> > > > +		struct page_pool *page_pool = pp->rxqs[i].page_pool;
> > > > +		struct page_pool_stats pp_stats = {};
> > > > +
> > > > +		if (!page_pool_get_stats(page_pool, &pp_stats))
> > > > +			continue;
> > > > +
> > > > +		stats->alloc_stats.fast += pp_stats.alloc_stats.fast;
> > > > +		stats->alloc_stats.slow += pp_stats.alloc_stats.slow;
> > > > +		stats->alloc_stats.slow_high_order +=
> > > > +			pp_stats.alloc_stats.slow_high_order;
> > > > +		stats->alloc_stats.empty += pp_stats.alloc_stats.empty;
> > > > +		stats->alloc_stats.refill += pp_stats.alloc_stats.refill;
> > > > +		stats->alloc_stats.waive += pp_stats.alloc_stats.waive;
> > > > +		stats->recycle_stats.cached += pp_stats.recycle_stats.cached;
> > > > +		stats->recycle_stats.cache_full +=
> > > > +			pp_stats.recycle_stats.cache_full;
> > > > +		stats->recycle_stats.ring += pp_stats.recycle_stats.ring;
> > > > +		stats->recycle_stats.ring_full +=
> > > > +			pp_stats.recycle_stats.ring_full;
> > > > +		stats->recycle_stats.released_refcnt +=
> > > > +			pp_stats.recycle_stats.released_refcnt;
> > > 
> > > Am I right in saying these are all software stats? They are also
> > > generic for any receive queue using the page pool?
> > 
> > Yes, these stats are accounted for by the kernel, so they are sw stats, but I
> > guess the xdp ones are sw as well, right?
> > 
> > > 
> > > It seems odd that the driver is doing the addition here. Why not pass
> > > stats into page_pool_get_stats()? That would make it easier when you
> > > add additional statistics.
> > > 
> > > I'm also wondering if ethtool -S is even the correct API. It should be
> > > for hardware-dependent statistics, those which change between
> > > implementations, whereas these statistics are generic. Maybe they
> > > should live in /sys/class/net/ethX/statistics/, so the driver itself
> > > is not even involved and the page pool code implements it?
> > 
> > I do not have a strong opinion on it, but I can see an issue for some drivers
> > (e.g. mvpp2 iirc) where page_pools are not specific to each net_device but are
> > shared between multiple ports, so maybe it is better to let the driver decide
> > how to report them. What do you think?
> 
> When I did the implementation of this API, the feedback was essentially
> that the drivers should be responsible for reporting the stats of their
> active page_pool structures; this is why the first driver to use the API
> (mlx5) outputs the stats via ethtool -S.
> 
> I have no strong preference, either, but I think that exposing the stats
> via an API for the drivers to consume is less tricky; the driver knows
> which page_pools are active and which pool is associated with which
> RX-queue, and so on.
> 
> If there is general consensus for a different approach amongst the
> page_pool maintainers, I am happy to implement it.

If we keep this in the drivers, it would be good to try to move some of
the code into the core, to keep cut/paste to a minimum. We want the
same strings for every driver, for example, and it looks like it is
going to be hard to add new counters, since you will need to touch
every driver using the page pool.
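
As a rough, untested sketch, assuming page_pool_get_stats() were changed
to accumulate into the caller-supplied struct (today it just fills it),
the driver side could shrink to something like:

static void mvneta_ethtool_update_pp_stats(struct mvneta_port *pp,
					   struct page_pool_stats *stats)
{
	int i;

	/* Zero once; each call below would add one rx queue's counters. */
	memset(stats, 0, sizeof(*stats));
	for (i = 0; i < rxq_number; i++)
		page_pool_get_stats(pp->rxqs[i].page_pool, stats);
}

with all the per-field alloc_stats/recycle_stats additions living in one
place in the page pool core, so a new counter only touches the core.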

Maybe even consider adding ETH_SS_PAGE_POOL. You can then put
page_pool_get_sset_count() and page_pool_get_sset_strings() as helpers
in the core, and the driver just needs to implement the get_stats()
part, again with a helper in the core which can do most of the work.
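
Purely for illustration (ETH_SS_PAGE_POOL, the page_pool_get_sset_*()
helpers and the foo_* driver names below are hypothetical, not existing
API), the driver glue could end up as small as:

static int foo_get_sset_count(struct net_device *dev, int sset)
{
	switch (sset) {
	case ETH_SS_PAGE_POOL:
		/* Core helper: number of page pool counters. */
		return page_pool_get_sset_count();
	default:
		return -EOPNOTSUPP;
	}
}

static void foo_get_strings(struct net_device *dev, u32 sset, u8 *data)
{
	if (sset == ETH_SS_PAGE_POOL)
		/* Core helper: identical counter names for every driver. */
		page_pool_get_sset_strings(data);
}

static void foo_get_page_pool_stats(struct net_device *dev, u64 *data)
{
	struct foo_priv *priv = netdev_priv(dev);
	int i;

	/* The driver only knows which pools are active; a core helper
	 * (hypothetical name) would do the accumulation into data[].
	 * How the core invokes this callback is left out here.
	 */
	for (i = 0; i < priv->num_rxqs; i++)
		page_pool_fill_sset_stats(priv->rxqs[i].page_pool, data);
}

New counters would then only need changes in the page pool core, not in
every driver.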

       Andrew
