Message-ID: <CAC_iWjJdPvhd5Py5vWqWtbf16eJZfg_NWU=BBM90302mSZA=sQ@mail.gmail.com>
Date: Thu, 7 Apr 2022 23:14:15 +0300
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Joe Damato <jdamato@...tly.com>
Cc: Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org,
lorenzo.bianconi@...hat.com, davem@...emloft.net, kuba@...nel.org,
pabeni@...hat.com, jbrouer@...hat.com
Subject: Re: [PATCH net-next] page_pool: Add recycle stats to page_pool_put_page_bulk
Hi Joe,
On Thu, 7 Apr 2022 at 02:15, Joe Damato <jdamato@...tly.com> wrote:
>
> On Tue, Apr 05, 2022 at 10:52:55PM +0200, Lorenzo Bianconi wrote:
> > Add missing recycle stats to page_pool_put_page_bulk routine.
>
> Thanks for proposing this change. I did miss this path when adding
> stats.
>
> I'm sort of torn on this. It almost seems that we might want to track
> bulking events separately as their own stat.
>
> Maybe Ilias has an opinion on this; I did implement the stats, but I'm not
> a maintainer of the page_pool so I'm not sure what I think matters all
> that much ;)
It does. In fact, I think people who actually use the stats for
something have a better understanding of what's useful and what's not.
OTOH page_pool_put_page_bulk() is only used on the XDP path for now,
and it ends up returning pages in a for loop. So personally I think we
are fine without it. Each page will either be returned to the ptr_ring
cache or be freed, and we already account for both of those cases.
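For reference, the tail of page_pool_put_page_bulk() handles both
outcomes; roughly (paraphrasing net/core/page_pool.c from memory, so
treat this as a sketch rather than the exact code):

        page_pool_ring_unlock(pool);

        /* Hopefully all pages were returned into the ptr_ring */
        if (likely(i == bulk_len))
                return;

        /* ptr_ring cache full: free the remaining pages outside the
         * producer lock, since put_page() with refcnt == 1 can be an
         * expensive operation
         */
        for (; i < bulk_len; i++)
                page_pool_return_page(pool, data[i]);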
However, looking at the code I noticed another issue.
__page_pool_alloc_pages_slow() increments the 'slow' stat by one, but
we are not allocating only a single page in there: we allocate nr_pages
and feed all of them but one to the cache. So imho we should bump the
slow counter appropriately there. The next allocations will probably be
served from the cache and will get their own proper counters.
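Something along these lines would do it; alloc_stat_add() is a
hypothetical companion to the existing alloc_stat_inc(), mirroring the
recycle_stat_add() this patch introduces, and assuming alloc stats stay
per-pool as alloc_stat_inc() suggests (untested sketch):

        /* hypothetical: alloc stats are per-pool, so a plain add works */
        #define alloc_stat_add(pool, __stat, val) \
                        (pool->alloc_stats.__stat += val)

        /* in __page_pool_alloc_pages_slow(), after the bulk allocation:
         * count every page the bulk alloc actually produced, not just
         * the single page returned to the caller
         */
        alloc_stat_add(pool, slow, nr_pages);

with nr_pages here standing for however many pages the bulk allocation
actually returned.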
>
> > Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> > ---
> > net/core/page_pool.c | 15 +++++++++++++--
> > 1 file changed, 13 insertions(+), 2 deletions(-)
> >
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index 1943c0f0307d..4af55d28ffa3 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -36,6 +36,12 @@
> > this_cpu_inc(s->__stat); \
> > } while (0)
> >
> > +#define recycle_stat_add(pool, __stat, val) \
> > + do { \
> > + struct page_pool_recycle_stats __percpu *s = pool->recycle_stats; \
> > + this_cpu_add(s->__stat, val); \
> > + } while (0)
> > +
> > bool page_pool_get_stats(struct page_pool *pool,
> > struct page_pool_stats *stats)
> > {
> > @@ -63,6 +69,7 @@ EXPORT_SYMBOL(page_pool_get_stats);
> > #else
> > #define alloc_stat_inc(pool, __stat)
> > #define recycle_stat_inc(pool, __stat)
> > +#define recycle_stat_add(pool, __stat, val)
> > #endif
> >
> > static int page_pool_init(struct page_pool *pool,
> > @@ -566,9 +573,13 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
> > /* Bulk producer into ptr_ring page_pool cache */
> > page_pool_ring_lock(pool);
> > for (i = 0; i < bulk_len; i++) {
> > - if (__ptr_ring_produce(&pool->ring, data[i]))
> > - break; /* ring full */
> > + if (__ptr_ring_produce(&pool->ring, data[i])) {
> > + /* ring full */
> > + recycle_stat_inc(pool, ring_full);
> > + break;
> > + }
> > }
> > + recycle_stat_add(pool, ring, i);
>
> If we do go with this approach (instead of adding bulking-specific stats),
> we might want to replicate this change in __page_pool_alloc_pages_slow; we
> currently only count the single allocation returned by the slow path, but
> the rest of the pages which refilled the cache are not counted.
Ah yes, we are saying the same thing here.
Thanks
/Ilias
>
> > page_pool_ring_unlock(pool);
> >
> > /* Hopefully all pages was return into ptr_ring */
> > --
> > 2.35.1
> >