Message-ID: <CAC_iWjJv2uSHXmHkCvA+0Cbx=NT=jBi48vaow9xLgz4FmrF_wA@mail.gmail.com>
Date:   Thu, 7 Apr 2022 23:19:08 +0300
From:   Ilias Apalodimas <ilias.apalodimas@...aro.org>
To:     Andrew Lunn <andrew@...n.ch>
Cc:     Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org,
        lorenzo.bianconi@...hat.com, davem@...emloft.net, kuba@...nel.org,
        pabeni@...hat.com, thomas.petazzoni@...tlin.com,
        jbrouer@...hat.com, jdamato@...tly.com
Subject: Re: [RFC net-next 2/2] net: mvneta: add support for page_pool_get_stats

Hi Andrew,

[...]

> > > > +
> > > > +             stats.alloc_stats.fast += pp_stats.alloc_stats.fast;
> > > > +             stats.alloc_stats.slow += pp_stats.alloc_stats.slow;
> > > > +             stats.alloc_stats.slow_high_order +=
> > > > +                     pp_stats.alloc_stats.slow_high_order;
> > > > +             stats.alloc_stats.empty += pp_stats.alloc_stats.empty;
> > > > +             stats.alloc_stats.refill += pp_stats.alloc_stats.refill;
> > > > +             stats.alloc_stats.waive += pp_stats.alloc_stats.waive;
> > > > +             stats.recycle_stats.cached += pp_stats.recycle_stats.cached;
> > > > +             stats.recycle_stats.cache_full +=
> > > > +                     pp_stats.recycle_stats.cache_full;
> > > > +             stats.recycle_stats.ring += pp_stats.recycle_stats.ring;
> > > > +             stats.recycle_stats.ring_full +=
> > > > +                     pp_stats.recycle_stats.ring_full;
> > > > +             stats.recycle_stats.released_refcnt +=
> > > > +                     pp_stats.recycle_stats.released_refcnt;
> > >
> > > We should be trying to remove this sort of code from the driver, and
> > > put it all in the core.  It wants to be something more like:
> > >
> > >         struct page_pool_stats stats = {};
> > >         int i;
> > >
> > >         for (i = 0; i < rxq_number; i++) {
> > >                 struct page_pool *page_pool = pp->rxqs[i].page_pool;
> > >
> > >                 if (!page_pool_get_stats(page_pool, &stats))
> > >                         continue;
> > >         }
> > >
> > >         page_pool_ethtool_stats_get(data, &stats);
> > >
> > > Let page_pool_get_stats() do the accumulate as it puts values in stats.
> >
> > Unless I misunderstand this, I don't think that's doable in page pool.
> > That would mean page pool is aware of which stats to accumulate per
> > driver, and I certainly don't want anything driver-specific to creep
> > in there. The driver knows the number of pools it is using and can
> > gather them all together.
>
> I agree that the driver knows about the number of pools. For mvneta,
> there is one per RX queue. Which is this part of my suggestion
>
> > >         for (i = 0; i < rxq_number; i++) {
> > >                 struct page_pool *page_pool = pp->rxqs[i].page_pool;
> > >
>
> However, it has no idea about the stats themselves. They are purely a
> construct of the page pool. Hence the next part of my suggestion: ask
> the page pool for the stats, placing them into stats and doing the
> accumulation at the same time:
>
> > >                 if (!page_pool_get_stats(page_pool, &stats))
> > >                         continue;
>
> and now we have the accumulated stats, turn them into ethtool format:
>
> > >         page_pool_ethtool_stats_get(data, &stats);
>
> Where do you see any driver knowledge required in either
> page_pool_get_stats() or page_pool_ethtool_stats_get()?

Indeed, I read the first mail wrong. I thought you wanted page_pool
itself to account for the driver stats without passing a 'struct
page_pool *pool' to page_pool_get_stats(). In a system with XDP (which
also uses page_pool) or with multiple drivers using it, that would
require some metadata fed into the page pool subsystem to reason about
which pools to accumulate.

The code snippet you included seems fine.

Thanks
/Ilias

>
>       Andrew
