Message-ID: <15ba74a8-2fa0-6e57-cb6c-4a6e0f24e2ea@solarflare.com>
Date: Thu, 23 May 2019 18:09:55 +0100
From: Edward Cree <ecree@...arflare.com>
To: Jakub Kicinski <jakub.kicinski@...ronome.com>
CC: Jamal Hadi Salim <jhs@...atatu.com>, Jiri Pirko <jiri@...nulli.us>,
"Pablo Neira Ayuso" <pablo@...filter.org>,
David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Cong Wang <xiyou.wangcong@...il.com>,
"Andy Gospodarek" <andy@...yhouse.net>,
Michael Chan <michael.chan@...adcom.com>,
Vishal Kulkarni <vishal@...lsio.com>
Subject: Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action
statistics

On 23/05/2019 17:33, Jakub Kicinski wrote:
> On Thu, 23 May 2019 17:21:49 +0100, Edward Cree wrote:
>> Well, patch #2 updates drivers to the changed API, which is kinda an
>> "upstream user" if you squint... admittedly patch #1 is a bit dubious
>> in that regard;
> Both 1 and 2 are dubious if you ask me. Complexity added for no
> upstream gain.
Just to explain the pickle I'm in: when this hw finally ships, we intend
to both submit a driver upstream and also provide an out-of-tree driver
that builds on older kernels (same as we do with existing hardware). Now
there are already some kernel releases (5.1 and, when it comes out of -rc,
5.2) on which we'll just have to say "yeah, shared actions don't work here
and you'll get slightly-bogus stats if you use them". But I'd like to
minimise the set of kernels on which that's the case.
I realise that from a strict/absolutist perspective, that's "not the
upstream kernel's problem"; but at the same time it's qualitatively
different from, say, a vendor asking for hooks for a proprietary driver
that will *never* be upstream. Does a strict interpretation of the rules
here really make for a better kernel? There *will* be an upstream gain
eventually (when our driver goes in), just not in this release; it's not
like the kernel's never taken patches on that basis before (with a
specific use in mind, i.e. distinct from "oh, there *might* be an
upstream gain if someone uses this someday").
> 3 is a good patch, perhaps worth posting it separately
> rather than keeping it hostage to the rest of the series? :)
If the series gets a definitive nack or needs a respin then I'll do that;
otherwise posting it separately just makes more work for everyone.
> Side note - it's not clear why open code the loop in every driver rather
> than having flow_stats_update() handle the looping?
Because (as I alluded to in the patch description) we don't *want* people to do
the looping, we want them to implement per-action (rather than per-rule)
stats. Having such a loop in flow_stats_update() would encourage them to
think "there is a function to do this, other drivers call it, so it must be
what you're meant to do". Whereas if we make the Wrong Thing be a bit ugly,
hopefully developers will do the Right Thing instead.
(Perhaps I should add some comments into the drivers, instead, basically
saying "don't copy this, it's bogus"?)
IMHO the original (pre-5.1) tcf_exts_stats_update() never should have had a
loop in the first place, it was a fundamentally broken API wrt TC semantics.
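
To make the semantic point concrete, here's a minimal userspace sketch (hypothetical, simplified structs -- not the real flow_offload API) of why folding one rule-level counter into every action is broken. If a rule's actions don't all see the same packets (say, a policer that drops some before a later redirect), per-rule reporting over-counts the later actions, while per-action reporting matches what hardware actually did:

```c
#include <assert.h>

#define MAX_ACTS 4

/* Hypothetical stand-ins for illustration only. */
struct act_stats { unsigned long pkts; };

struct hw_rule {
	int n_acts;
	struct act_stats act_sw[MAX_ACTS]; /* software-side per-action totals */
	unsigned long act_hw[MAX_ACTS];    /* hardware's count per action */
	unsigned long rule_hw;             /* hardware's count for the rule */
};

/* The discouraged pattern: the single rule-level counter is copied
 * into every action the rule references. */
static void report_per_rule(struct hw_rule *r)
{
	for (int i = 0; i < r->n_acts; i++)
		r->act_sw[i].pkts += r->rule_hw;
}

/* The per-action pattern: each action gets its own hardware counter. */
static void report_per_action(struct hw_rule *r)
{
	for (int i = 0; i < r->n_acts; i++)
		r->act_sw[i].pkts += r->act_hw[i];
}
```

With a policer-then-redirect rule where 100 packets hit the rule but only 60 survive the policer, report_per_action() credits the redirect with 60, whereas report_per_rule() credits it with the full 100 -- the "slightly-bogus stats" failure mode.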
-Ed