Date:   Tue, 5 Feb 2019 20:45:02 -0800
From:   Jakub Kicinski <jakub.kicinski@...ronome.com>
To:     Roopa Prabhu <roopa@...ulusnetworks.com>
Cc:     David Miller <davem@...emloft.net>, oss-drivers@...ronome.com,
        netdev <netdev@...r.kernel.org>,
        Jiří Pírko <jiri@...nulli.us>,
        Florian Fainelli <f.fainelli@...il.com>,
        Andrew Lunn <andrew@...n.ch>,
        Michal Kubecek <mkubecek@...e.cz>,
        David Ahern <dsahern@...il.com>,
        Simon Horman <simon.horman@...ronome.com>,
        "Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
        Maciek Fijałkowski 
        <maciejromanfijalkowski@...il.com>,
        vasundhara-v.volam@...adcom.com,
        Michael Chan <michael.chan@...adcom.com>, shalomt@...lanox.com,
        Ido Schimmel <idosch@...lanox.com>
Subject: Re: [RFC 00/14] netlink/hierarchical stats

On Sat, 2 Feb 2019 15:14:44 -0800, Roopa Prabhu wrote:
> On Thu, Jan 31, 2019 at 11:31 AM Jakub Kicinski wrote:
> > On Thu, 31 Jan 2019 08:31:51 -0800, Roopa Prabhu wrote:  
> > > On Thu, Jan 31, 2019 at 8:16 AM Roopa Prabhu wrote:  
> > > > On Wed, Jan 30, 2019 at 4:24 PM Jakub Kicinski wrote:  
> > > > > On Wed, 30 Jan 2019 14:14:34 -0800, Roopa Prabhu wrote:
> > > > >
> > > > > My thinking was that we should leave truly custom/strange stats to
> > > > > ethtool API which works quite well for that and at the same time be
> > > > > very accepting of people adding new IDs to HSTAT (only requirement is
> > > > > basically defining the meaning very clearly).  
> > > >
> > > > That sounds reasonable. But the 'defining meaning clearly' part gets
> > > > tricky sometimes. The vendor who gets their ID or meaning in first
> > > > wins :) and the rest will have to live with ethtool and explain to
> > > > the rest of the world that ethtool is more reliable for their
> > > > hardware :)
> >
> > Right, that's the trade-off inherent to standardization.  I don't see
> > any way to work around the fact that the definition may not fit all.
> >
> > What I want as an end user and what I want for my customers is the
> > ability to switch the NIC on their system and not spend two months
> > "integrating" into their automation :(  If the definition of statistics
> > is not solid, we're back to square one.
> 
> agree. And I am with you on standardizing them.
> 
> >  
> > > > I am also concerned that getting the ID into the common HSTAT ID
> > > > space will slow down the process of adding new counters for vendors,
> > > > which will lead to vendors sticking with the ethtool API.
> >
> > I feel like whatever we did here would end up looking much like the
> > ethtool interface, which is why I decided to leave that part out.
> > Ethtool -S works pretty well for custom stats.  Standard and structured
> > stats don't fit with it in any way; the two seem best left separate.
> 
> I understand the fear. My only point was that getting them together in
> a single API is better, so that they don't get developed separately and
> we don't end up with duplicate stats code.
> 
> >  
> > > > It would be great if people could get all stats in one place and not
> > > > have to rely on another API for 'more'.
> >
> > One place in the driver or for the user?  I'm happy to add the code to
> > ethtool to also dump hstats and render them in a standard way.  In fact
> > the tool I have for testing has a "simplified" output format which
> > looks exactly like ethtool -S.
> >
> > One place for the driver to report is hard; as I said, I think the
> > custom stats are best left with ethtool, adding an extra incentive to
> > standardize.
> >  
> > > > > For the first stab I looked at two drivers and added all the stats that
> > > > > were common.
> > > > >
> > > > > Given this set identifies statistics by ID - how would we make that
> > > > > extensible to drivers?  Would we go back to strings or have some
> > > > > "driver specific" ID space?  
> > > >
> > > > I was really looking for ideas from you, to see if you had
> > > > considered this. I agree a per-driver ID space seems ugly.
> > > > ethtool strings are great today... if we can control the
> > > > duplication. But thinking some more, I did see some patches
> > > > recently for a vendor-specific parameter (with ID) space in
> > > > devlink. Maybe something like that would be reasonable?
> >
> > I thought about this for a year and I basically came to the conclusion
> > that I can't find any perfect solution, if one exists at all.
> >
> > The devlink parameters are useful, but as anticipated they became the
> > laziest excuse for an ABI... Don't get me started ;)
> >  
> > > > > Is there any particular type of statistic you'd expect drivers to want
> > > > > to add?  For NICs I think IEEE/RMON should pretty much cover the
> > > > > silicon ones, but I don't know much about switches :)  
> > > >
> > > > I will have to go through the list. But switch ASICs do support
> > > > flexible stats/counters that can be attached at various points,
> > > > and new chip versions come with more support. Having that
> > > > flexibility to expose/extend such stats incrementally is very
> > > > valuable on a per-hardware/per-vendor basis.
> >
> > Yes, I'm not too familiar with those counters.  Do they need to be
> > enabled to start counting?  
> 
> Yes, correct.
> 
> > Do they have a performance impact?
> 
> I have not heard of any performance impact... but they are not enabled
> by default because of the limited counter resource pool.

I see... I'd personally see that as something we'd probably support
either via perf, or via some new devlink perf-creation mechanism.  Those
are perf events, not stats, to me.  Devlink would probably suit fixed HW
better, and perf could feel slightly more natural for certain NICs
(*ahem* perf traces of offloaded BPF programs).
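
(A hypothetical sketch of what consuming such an on-demand counter via
perf could look like; the PMU type/config values below are placeholders,
nothing like this exists for these switch counters today:)

/* Rough sketch: reading a HW counter exposed as an event from a device
 * PMU.  The type and config values are placeholders; a real dynamic PMU
 * advertises its type under /sys/bus/event_source/devices/<pmu>/type.
 */
#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			   int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = 42;		/* placeholder: dynamic PMU type from sysfs */
	attr.config = 0x1;	/* placeholder: "match packet, count" event */

	fd = perf_event_open(&attr, -1, 0, -1, 0);	/* all tasks, CPU 0 */
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("count: %llu\n", (unsigned long long)count);

	close(fd);
	return 0;
}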

> > Can they "sample" events perf-style?
> 
> I don't think so.
> 
> > How is the condition on which they trigger
> > defined?  Is it maybe just "match a packet and increment a counter"?  
> 
> Yes, something like that.
> 
> > Would such counters benefit from a hierarchical structure?
> 
> Hmm, not sure.
> 
> 
> One thing though: for most of these flexible counters and their
> attachment points in hardware, we can count them on logical devices or
> other software objects, e.g. per-VLAN, per-VNI, or per-route stats.
> 
> >
> > I was trying to cover the long-standing use cases - namely the
> > IEEE/RMON stats which all MACs have had for years and the per-queue
> > stats which all drivers have had for years.  But if we can cater to
> > more cases I'm open.
> >  
> > > Just want to clarify that I am suggesting a nested HSTATS extension
> > > infra for drivers (just like ethtool).
> > > 'Common stats' stays at the top-level.  
> >
> > I got a concept of groups here.  The dump generally looks like this:
> >
> > [root group A (say MAC stats)]
> >   [sub group RX]
> >   [sub group TX]
> > [root group B (say PCIe stats)]
> >   [sub group RX]
> >   [sub group TX]
> > [root group C (say per-q driver stats)]
> >   [sub group RX]
> >     [q1 group]
> >     [q2 group]
> >     [q3 group]
> >   [sub group TX]
> >     [q1 group]
> >     [q2 group]
> >     [q3 group]
> >
> > Each root group represents a "point in the pipeline".
> >
> > So it's not too hard to add a root group with whatever; the question
> > is more how it would benefit over the existing ethtool if the stats
> > are custom anyway?  Hm..
> 
> It wouldn't. I am only saying that the netlink stats API is the new
> place to move stats to. Ethtool stats will have to move to netlink some
> day, and I don't see a need to draw a hard line by saying no to ethtool
> custom stats moving to the netlink-based common stats API. And unless
> there is a good migration path to a new hardware stats API that is
> all-inclusive, there is a higher chance of continued development on the
> older hardware stats API.
> I have no objections to having a standard set of stats (this is
> essentially what we have for software stats too).
> 
> I don't want to block your series from going forward without HW custom
> stats extensions. And IIUC your API is extensible and does not
> preclude anyone from adding the ability to include HW custom stats
> extensions in the future with enough justification. That is good for
> me.

Would you be more interested in seeing the similarity in API on the
driver side or on the netlink side?  I was hoping to leave the legacy
stats in ethtool (soon to be running over netlink as well) for the
time being.  I wish we had some form of library on the iproute2 side
we could evolve together with the kernel, libbpf-style :(
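
(For reference, the legacy driver-side path under discussion is the
usual ethtool string-set trio; a minimal sketch, with made-up "foo"
driver names and counters, looks roughly like this:)

/* Minimal sketch of the legacy ethtool -S path, for illustration only.
 * The foo_* names and the two counters are made up.
 */
#include <linux/ethtool.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/string.h>

struct foo_priv {
	u64 hw_drop_a;	/* refreshed from device registers/FW elsewhere */
	u64 hw_drop_b;
};

static const char foo_stat_names[][ETH_GSTRING_LEN] = {
	"custom_drop_reason_a",
	"custom_drop_reason_b",
};

static int foo_get_sset_count(struct net_device *dev, int sset)
{
	if (sset != ETH_SS_STATS)
		return -EOPNOTSUPP;
	return ARRAY_SIZE(foo_stat_names);
}

static void foo_get_strings(struct net_device *dev, u32 sset, u8 *data)
{
	if (sset == ETH_SS_STATS)
		memcpy(data, foo_stat_names, sizeof(foo_stat_names));
}

static void foo_get_ethtool_stats(struct net_device *dev,
				  struct ethtool_stats *stats, u64 *data)
{
	struct foo_priv *priv = netdev_priv(dev);

	data[0] = priv->hw_drop_a;
	data[1] = priv->hw_drop_b;
}

static const struct ethtool_ops foo_ethtool_ops = {
	.get_sset_count		= foo_get_sset_count,
	.get_strings		= foo_get_strings,
	.get_ethtool_stats	= foo_get_ethtool_stats,
};

That side keeps working as-is whether or not the structured stats move
to the new netlink API.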

> To take a random example, we expose the following stats on our
> switches via ethtool. I have not used them personally but they
> correspond to the respective hardware counters. Is there any room for
> such stats in the new HSTATS netlink API, or will they have to stick
> to ethtool?
> I believe people will need per-queue counters for this.
> 
>      HwIfOutWredDrops: 0
>      HwIfOutQ0WredDrops: 0
>      HwIfOutQ1WredDrops: 0
>      HwIfOutQ2WredDrops: 0
>      HwIfOutQ3WredDrops: 0
>      HwIfOutQ4WredDrops: 0
>      HwIfOutQ5WredDrops: 0
>      HwIfOutQ6WredDrops: 0
>      HwIfOutQ7WredDrops: 0
>      HwIfOutQ8WredDrops: 0
>      HwIfOutQ9WredDrops: 0

Well, yes, these are clearly enough defined stats, and I'd be very
happy to add an ID for you for those... if they shouldn't rather be
reported in the stats of the tc red qdisc that should be used to
configure WRED :(
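
(For reference, the red qdisc already exposes drop counters of exactly
this kind via its xstats; paraphrasing include/uapi/linux/pkt_sched.h
from memory - double-check the header - it looks roughly like the below,
and offloaded WRED drops would most naturally land there rather than
behind a new stats ID:)

/* Roughly what "tc -s qdisc show" reports for a red qdisc; from memory
 * of the uapi header, check include/uapi/linux/pkt_sched.h for the
 * authoritative layout.
 */
struct tc_red_xstats {
	__u32 early;	/* early (WRED) drops */
	__u32 pdrop;	/* drops due to the queue limit */
	__u32 other;	/* drops due to drop() calls */
	__u32 marked;	/* ECN-marked packets */
};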
