[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <YfVKDxenS5IWxCLX@hades>
Date: Sat, 29 Jan 2022 16:07:11 +0200
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Joe Damato <jdamato@...tly.com>
Cc: netdev@...r.kernel.org, kuba@...nel.org, davem@...emloft.net,
hawk@...nel.org
Subject: Re: [PATCH net-next 0/6] net: page_pool: Add page_pool stat counters
Hi Joe!
On Thu, Jan 27, 2022 at 03:55:03PM -0800, Joe Damato wrote:
> On Thu, Jan 27, 2022 at 1:08 AM Ilias Apalodimas
> <ilias.apalodimas@...aro.org> wrote:
> >
> > Hi Joe,
> >
> > On Wed, Jan 26, 2022 at 02:48:14PM -0800, Joe Damato wrote:
> > > Greetings:
> > >
> > > This series adds some stat counters for the page_pool allocation path which
> > > help to track:
> > >
> > > - fast path allocations
> > > - slow path order-0 allocations
> > > - slow path high order allocations
> > > - refills which failed due to an empty ptr ring, forcing a slow
> > > path allocation
> > > - allocations fulfilled via successful refill
> > > - pages which cannot be added to the cache because of numa mismatch
> > > (i.e. waived)
> > >
> >
> > Thanks for the patch. Stats are something that's indeed missing from the
> > API. The patch should work for Rx based allocations (which is what you
> > currently cover), since the RX side is usually protected by NAPI. However
> > we've added a few features recently, which we would like to have stats on.
>
> Thanks for taking a look at the patch.
>
yw
> > commit 6a5bcd84e886("page_pool: Allow drivers to hint on SKB recycling"),
> > introduces recycling capabilities on the API. I think it would be far more
> > interesting to be able to extend the statistics to recycled/non-recycled
> > packets as well in the future.
>
> I agree. Tracking recycling events would be both helpful and
> interesting, indeed.
>
> > But the recycling is asynchronous and we
> > can't add locks just for the sake of accurate statistics.
>
> Agreed.
>
> > Can we instead
> > convert that to a per-cpu structure for producers?
>
> If my understanding of your proposal is accurate, moving the stats
> structure to a per-cpu structure (instead of per-pool) would add
> ambiguity as to the performance of a specific driver's page pool. In
> exchange for the ambiguity, though, we'd get stats for additional
> events, which could be interesting.
I was mostly thinking per pool using with 'struct percpu_counter' or
allocate __percpu variables, but I haven't really checked if that's doable or
which of those is better suited for our case.
>
> It seems like under load it might be very useful to know that a
> particular driver's page pool is adding pressure to the buddy
> allocator in the slow path. I suppose that a user could move softirqs
> around on their system to alleviate some of the ambiguity and perhaps
> that is good enough.
>
[...]
Cheers
/Ilias
Powered by blists - more mailing lists