Message-ID: <Yk86vuqcCOZVxgOe@lunn.ch>
Date: Thu, 7 Apr 2022 21:25:50 +0200
From: Andrew Lunn <andrew@...n.ch>
To: Ilias Apalodimas <ilias.apalodimas@...aro.org>
Cc: Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org,
lorenzo.bianconi@...hat.com, davem@...emloft.net, kuba@...nel.org,
pabeni@...hat.com, thomas.petazzoni@...tlin.com,
jbrouer@...hat.com, jdamato@...tly.com
Subject: Re: [RFC net-next 2/2] net: mvneta: add support for
page_pool_get_stats
On Thu, Apr 07, 2022 at 09:35:52PM +0300, Ilias Apalodimas wrote:
> Hi Andrew,
>
> On Thu, 7 Apr 2022 at 21:25, Andrew Lunn <andrew@...n.ch> wrote:
> >
> > > +static void mvneta_ethtool_pp_stats(struct mvneta_port *pp, u64 *data)
> > > +{
> > > + struct page_pool_stats stats = {};
> > > + int i;
> > > +
> > > + for (i = 0; i < rxq_number; i++) {
> > > + struct page_pool *page_pool = pp->rxqs[i].page_pool;
> > > + struct page_pool_stats pp_stats = {};
> > > +
> > > + if (!page_pool_get_stats(page_pool, &pp_stats))
> > > + continue;
> > > +
> > > + stats.alloc_stats.fast += pp_stats.alloc_stats.fast;
> > > + stats.alloc_stats.slow += pp_stats.alloc_stats.slow;
> > > + stats.alloc_stats.slow_high_order +=
> > > + pp_stats.alloc_stats.slow_high_order;
> > > + stats.alloc_stats.empty += pp_stats.alloc_stats.empty;
> > > + stats.alloc_stats.refill += pp_stats.alloc_stats.refill;
> > > + stats.alloc_stats.waive += pp_stats.alloc_stats.waive;
> > > + stats.recycle_stats.cached += pp_stats.recycle_stats.cached;
> > > + stats.recycle_stats.cache_full +=
> > > + pp_stats.recycle_stats.cache_full;
> > > + stats.recycle_stats.ring += pp_stats.recycle_stats.ring;
> > > + stats.recycle_stats.ring_full +=
> > > + pp_stats.recycle_stats.ring_full;
> > > + stats.recycle_stats.released_refcnt +=
> > > + pp_stats.recycle_stats.released_refcnt;
> >
> > We should be trying to remove this sort of code from the driver, and
> > put it all in the core. It wants to be something more like:
> >
> > struct page_pool_stats stats = {};
> > int i;
> >
> > for (i = 0; i < rxq_number; i++) {
> > struct page_pool *page_pool = pp->rxqs[i].page_pool;
> >
> > if (!page_pool_get_stats(page_pool, &stats))
> > continue;
> >
> > page_pool_ethtool_stats_get(data, &stats);
> >
> > Let page_pool_get_stats() do the accumulate as it puts values in stats.
>
> Unless I misunderstand this, I don't think that's doable in page pool.
> That means page pool is aware of what stats to accumulate per driver
> and I certainly don't want anything driver specific to creep in there.
> The driver knows the number of pools he is using and he can gather
> them all together.
I agree that the driver knows the number of pools; for mvneta, there
is one per RX queue. That is this part of my suggestion:
> > for (i = 0; i < rxq_number; i++) {
> > struct page_pool *page_pool = pp->rxqs[i].page_pool;
> >
However, it has no idea about the stats themselves. They are purely a
construct of the page pool. Hence the next part of my suggestion: ask
the page pool for the stats and place them into stats, doing the
accumulation at the same time:
> > if (!page_pool_get_stats(page_pool, &stats))
> > continue;
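To make "do the accumulate" concrete, here is a minimal sketch of what
I mean on the core side. It assumes the pool keeps its own counters in
something like pool->alloc_stats / pool->recycle_stats; the exact
internal layout (per-CPU or not) is a detail of the core and does not
change the argument:

	bool page_pool_get_stats(struct page_pool *pool,
				 struct page_pool_stats *stats)
	{
		if (!stats)
			return false;

		/* Add this pool's counters on top of whatever the caller
		 * has already collected, instead of overwriting, so one
		 * zeroed struct page_pool_stats can be passed in for
		 * every pool in turn.
		 */
		stats->alloc_stats.fast += pool->alloc_stats.fast;
		stats->alloc_stats.slow += pool->alloc_stats.slow;
		stats->alloc_stats.slow_high_order +=
			pool->alloc_stats.slow_high_order;
		stats->alloc_stats.empty += pool->alloc_stats.empty;
		stats->alloc_stats.refill += pool->alloc_stats.refill;
		stats->alloc_stats.waive += pool->alloc_stats.waive;
		stats->recycle_stats.cached += pool->recycle_stats.cached;
		stats->recycle_stats.cache_full +=
			pool->recycle_stats.cache_full;
		stats->recycle_stats.ring += pool->recycle_stats.ring;
		stats->recycle_stats.ring_full +=
			pool->recycle_stats.ring_full;
		stats->recycle_stats.released_refcnt +=
			pool->recycle_stats.released_refcnt;

		return true;
	}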
and now that we have the accumulated stats, turn them into the ethtool format:
> > page_pool_ethtool_stats_get(data, &stats);
Where do you see any driver knowledge required in either
page_pool_get_stats() or page_pool_ethtool_stats_get()?
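For completeness, with those two helpers the mvneta side could
collapse to roughly the following. This is only a sketch reusing the
names from your patch; nothing in it is specific to mvneta's stats,
only to mvneta's pools:

	static void mvneta_ethtool_pp_stats(struct mvneta_port *pp, u64 *data)
	{
		struct page_pool_stats stats = {};
		int i;

		/* The driver contributes only what it actually knows:
		 * how many pools there are and where they live.
		 */
		for (i = 0; i < rxq_number; i++)
			page_pool_get_stats(pp->rxqs[i].page_pool, &stats);

		/* The core turns the accumulated counters into the
		 * ethtool data layout.
		 */
		page_pool_ethtool_stats_get(data, &stats);
	}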
Andrew