Message-ID: <20160209174102.7a2a1aee@xeon-e3>
Date: Tue, 9 Feb 2016 17:41:02 -0800
From: Stephen Hemminger <stephen@...workplumber.org>
To: Jarod Wilson <jarod@...hat.com>
Cc: Jamal Hadi Salim <jhs@...atatu.com>,
David Miller <davem@...emloft.net>, eric.dumazet@...il.com,
linux-kernel@...r.kernel.org, edumazet@...gle.com,
jiri@...lanox.com, daniel@...earbox.net, tom@...bertland.com,
j.vosburgh@...il.com, vfalico@...il.com, gospo@...ulusnetworks.com,
netdev@...r.kernel.org
Subject: Re: [PATCH net-next iproute2] iplink: display rx nohandler stats

On Tue, 9 Feb 2016 18:51:35 -0500
Jarod Wilson <jarod@...hat.com> wrote:
> On Tue, Feb 09, 2016 at 11:17:57AM -0800, Stephen Hemminger wrote:
> > Add support for the new rx_nohandler statistic.
> > This code is designed to handle the case where the statistics structure
> > reported by the kernel is smaller than the larger structure used in
> > later releases (and vice versa).
>
> This seems to work here, for the most part. However, if you are running a
> kernel with the new counter, and the counter happens to contain 0, won't
> we end up not printing anything?

That is the desirable outcome: if run on an older system, the output
format will not change from the current format.
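
Roughly what the copy path does here (an untested sketch, not the
actual patch; copy_link_stats64() is a made-up name for illustration):

#include <string.h>
#include <linux/if_link.h>
#include <linux/rtnetlink.h>

/* Copy whatever the kernel sent into a zeroed local struct.  An older
 * kernel sends fewer bytes, so new fields such as rx_nohandler stay 0
 * and are simply not printed; a newer kernel's extra tail is ignored. */
static void copy_link_stats64(struct rtnl_link_stats64 *dst,
			      struct rtattr *attr)
{
	size_t len = RTA_PAYLOAD(attr);

	if (len > sizeof(*dst))
		len = sizeof(*dst);

	memset(dst, 0, sizeof(*dst));
	memcpy(dst, RTA_DATA(attr), len);
}
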
> I've got a tweaked version here locally that gets a touch messy, where I
> get a count of members from RTA_DATA(IFLA_STATS{,64}) / sizeof(__u{32,64}),
> pass that into the print functions, and key off that length for whether or
> not to print the extra members, so they'll show up even when 0, if they're
> supported. This does rely on strict ordering of the struct members, no
> reordering, no removals, etc., but I think everyone is already in favor of
> that. Looks like the same sort of length checks could be used for
> rx_compressed and tx_compressed as well, as I think they fall victim to
> the same issue of not printing if those counters are legitimately 0. Yes,
> it's a little uglier, and more brittle, but more accurate output.
>
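
For reference, the length-keyed printing described above might look
roughly like this (again an untested sketch; print_stats64() is a
made-up helper, and the offsetof() check assumes the append-only
struct layout):

#include <stdio.h>
#include <stddef.h>
#include <linux/if_link.h>
#include <linux/rtnetlink.h>

/* Key the decision to print on the payload length the kernel sent,
 * not on the counter's value, so a legitimate 0 is still printed. */
static void print_stats64(struct rtattr *attr)
{
	const struct rtnl_link_stats64 *s = RTA_DATA(attr);

	printf("RX: bytes %llu packets %llu\n",
	       (unsigned long long)s->rx_bytes,
	       (unsigned long long)s->rx_packets);

	/* rx_nohandler is present iff the payload covers the field. */
	if (RTA_PAYLOAD(attr) >=
	    offsetof(struct rtnl_link_stats64, rx_nohandler) +
	    sizeof(s->rx_nohandler))
		printf("    nohandler %llu\n",
		       (unsigned long long)s->rx_nohandler);
}
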
I don't like the added complexity.