Message-ID: <20230818170121.3112bb0a@kernel.org>
Date: Fri, 18 Aug 2023 17:01:21 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Przemek Kitszel <przemyslaw.kitszel@...el.com>
Cc: Tony Nguyen <anthony.l.nguyen@...el.com>, <davem@...emloft.net>,
<pabeni@...hat.com>, <edumazet@...gle.com>, <netdev@...r.kernel.org>, Alan
Brady <alan.brady@...el.com>, <pavan.kumar.linga@...el.com>,
<emil.s.tantilov@...el.com>, <jesse.brandeburg@...el.com>,
<sridhar.samudrala@...el.com>, <shiraz.saleem@...el.com>,
<sindhu.devale@...el.com>, <willemb@...gle.com>, <decot@...gle.com>,
<andrew@...n.ch>, <leon@...nel.org>, <mst@...hat.com>,
<simon.horman@...igine.com>, <shannon.nelson@....com>,
<stephen@...workplumber.org>, Alice Michael <alice.michael@...el.com>,
"Joshua Hay" <joshua.a.hay@...el.com>, Phani Burra
<phani.r.burra@...el.com>
Subject: Re: [PATCH net-next v5 14/15] idpf: add ethtool callbacks
On Sat, 19 Aug 2023 00:42:56 +0200 Przemek Kitszel wrote:
> I see that here we (Intel) attempt for the first time to propose our
> "Unified stats" naming scheme [1].
>
> Purpose is to have:
> - common naming scheme (at least for the ice we have patch ~ready);
> - less "customer frustration";
> - easier job for analytical scripts, copying from wiki:
> | The naming schema was created to be human readable and easily parsed
> | by an analytic engine (such as a script or other entity).
> | All statistic strings will be comprised of three components:
> | @Where, @Instance and @Units. Each of these components is separated
> | by an underscore "_"; if a component is comprised of more than one
> | word, then those words are separated by a dash "-".
> |
> | An example statistic that shows this is xdp-rx-dropped_q-23_packets.
> | In this case the @Where is xdp-rx-dropped, the @Instance is q-23 and
> | the @Units is packets.
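
For reference, the quoted schema is simple enough to parse mechanically. A minimal sketch (the function name and dict layout are my own illustration, not part of the proposal):

```python
def parse_stat(name):
    """Split a "Unified stats" string into its three components.

    Per the quoted schema, components are separated by "_" and
    multi-word components use "-" internally, so a plain split on
    the underscores recovers @Where, @Instance and @Units.
    """
    where, instance, units = name.split("_")
    return {"where": where, "instance": instance, "units": units}

# Example from the quoted wiki text:
# parse_stat("xdp-rx-dropped_q-23_packets")
#   -> {"where": "xdp-rx-dropped", "instance": "q-23", "units": "packets"}
```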
That is one of the two main problems with the ethtool -S stats:
everyone comes up with new "common" standards, endlessly.
Cue the "Standards" xkcd.
Once we have a netlink GET for queues we can plonk the per-queue
stats there pretty easily.