Message-ID: <6af7f16f-2ce4-4584-a7dc-47116158d47a@intel.com>
Date: Wed, 20 Nov 2024 16:23:59 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>
CC: Jakub Kicinski <kuba@...nel.org>, "David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
	Toke Høiland-Jørgensen <toke@...hat.com>,
	Alexei Starovoitov <ast@...nel.org>,
	Daniel Borkmann <daniel@...earbox.net>,
	John Fastabend <john.fastabend@...il.com>,
	Andrii Nakryiko <andrii@...nel.org>,
	Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
	Stanislav Fomichev <sdf@...ichev.me>,
	Magnus Karlsson <magnus.karlsson@...el.com>,
	<nex.sw.ncis.osdt.itp.upstreaming@...el.com>, <bpf@...r.kernel.org>,
	<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes
(+libeth_xdp)
From: Willem De Bruijn <willemdebruijn.kernel@...il.com>
Date: Tue, 19 Nov 2024 10:14:28 -0500
> Alexander Lobakin wrote:
>> From: Willem De Bruijn <willemdebruijn.kernel@...il.com>
>> Date: Sat, 16 Nov 2024 10:31:08 -0500
[...]
>> libeth_xdp depends on every patch in the series. I don't know why you
>> believe this might move any faster. Almost the whole series got
>> reviewed relatively quickly, except for the drivers/intel folder,
>> which people often tend to avoid.
>
> Smaller focused series might have been merged already.
Merging half of this series wouldn't change the fact that the whole set
doesn't fit into one window (which is what you want), and it wouldn't
allow sending the idpf XDP parts either.
>
>> I remind you that the initial libeth + iavf series (11 patches) was
>> baking on LKML for one year. Here 2 Chapters went into the kernel within
>> 2 windows and only this one (clearly much bigger than the previous ones
>> and containing only generic changes in contrary to the previous which
>> had only /intel code) didn't follow this rule, which doesn't
>> unnecessarily mean it will stuck for too long.
>>
>> (+ I clearly mentioned several times that Chapter III will take longer
>> than the rest and each time you had no issues with that)
>
> This is a misunderstanding. I need a working feature, on a predictable
> timeline, in distro kernels.
A predictable timeline is not something upstream gives you, at least
not for series which introduce a lot of generic changes / additions.
A good example is PFCP offload in ice: the initial support was written
and sent in spring 2022, and it then took almost two years to land in
the kernel. The first series was of good quality, but there will always
be discussions, differing opinions, etc.
I've no idea what misunderstanding you're talking about; I quoted what
Oregon told me quoting you. You seem to have ignored the email I sent
with a per-patch breakdown of why none of them can be dropped from the
upstream XDP for idpf; at least I haven't seen any reply. I've no idea
what they promise you each kernel release, but I haven't promised
anything except sending the first working RFC by the end of 2023, which
was done back then, because promising that feature X will definitely
land in upstream release Y would mean lying. There's always a risk that
even a small series can easily miss 1-3 kernel releases.
Take a look at Amit's comment. It involves additional work which I
didn't expect. I'm planning to do it while the window is closed, as the
suggestion is perfectly valid and I don't have any arguments against
it.
Feel free to go there and argue that the comment is not valid because
you want the series merged ASAP, if you think that this "argument"
works upstream.
>
>>>
>>> The first 3 patches are not essential to IDPF XDP + AF_XDP either.
>>
>> You don't seem to read the code. libeth_xdp won't even build without them.
>
> Not as written, no, obviously.
If you want to compare with the OOT implementation for the 10th time,
let me remind you that it differs a lot from the upstream version of
idpf. The OOT driver still doesn't use Page Pool (without which idpf
wouldn't have been accepted upstream at all), for example, which
automatically drops the dependency on several big patches from this
series. The OOT implementation performs X times worse than the upstream
ice. It still forces header split to be turned off when an XDP prog is
installed. It still uses hardcoded Rx buffer sizes. I could keep
enumerating things from the OOT driver that are unacceptable upstream
forever.
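To illustrate the gap: with Page Pool, the pool owns the DMA mapping
and page recycling, instead of the driver open-coding alloc_page() +
dma_map_page() for every Rx buffer. A rough sketch of what the Rx side
looks like after such a conversion (the my_rx_* names are made up for
illustration; only the page_pool_* calls are the actual kernel API):

	#include <linux/dma-mapping.h>
	#include <net/page_pool/helpers.h>

	/* Illustrative ring state, not taken from any real driver. */
	struct my_rx_ring {
		struct page_pool *pool;
	};

	static int my_rx_create_pool(struct my_rx_ring *ring,
				     struct device *dev, u32 ring_size)
	{
		struct page_pool_params pp = {
			/* The pool maps pages for DMA and syncs them
			 * for the device before handing them out.
			 */
			.flags		= PP_FLAG_DMA_MAP |
					  PP_FLAG_DMA_SYNC_DEV,
			.order		= 0,
			.pool_size	= ring_size,
			.nid		= NUMA_NO_NODE,
			.dev		= dev,
			.dma_dir	= DMA_FROM_DEVICE,
			.max_len	= PAGE_SIZE,
		};
		struct page_pool *pool = page_pool_create(&pp);

		if (IS_ERR(pool))
			return PTR_ERR(pool);

		ring->pool = pool;

		return 0;
	}

	/* Refill one Rx descriptor: the page arrives pre-mapped, and
	 * freed pages are recycled back into the pool instead of going
	 * through the page allocator each time.
	 */
	static dma_addr_t my_rx_alloc_buf(struct my_rx_ring *ring)
	{
		struct page *page = page_pool_dev_alloc_pages(ring->pool);

		return page ? page_pool_get_dma_addr(page) :
			      DMA_MAPPING_ERROR;
	}

(For XDP, dma_dir would typically become DMA_BIDIRECTIONAL so that
XDP_TX can transmit out of the same pages.)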
>
>> I don't believe the model taken by some developers (not naming names
>> aloud) of "let's submit minimal changes and almost-draft code, I
>> promise I'll create a todo list and will be polishing it within the
>> next x years" works at all, let alone that it may work better than
>> sending polished, mature code (which I hope mine is).
>>
>>> The IDPF feature does not have to depend on them.
>>>
>>> Does not matter for upstream, but for the purpose of backporting this
>>> to distro kernels, it helps if the driver feature minimizes dependency
>>> on core kernel API changes. If patch 19 can be made to work without
>>
>> OOT style of thinking.
>> Minimizing core changes == artificially limiting the optimization and
>> functionality potential.
>> New kernels > LTSes, and especially > custom kernels which receive
>> non-upstream (== not officially supported by the community) feature
>> backports. Upstream shouldn't sacrifice anything in favor of those;
>> down that road, we'd end up one day sacrificing stuff for out-of-tree
>> drivers (which I know some people already try to do).
>
> Opinionated positions. Nice if you have unlimited time.
I clearly remember Kuba's position that he wants high-quality
networking core and driver code, and I'm pretty sure every netdev
maintainer holds the same position. Again, feel free to argue with
them, saying they must take whatever trash is sent to LKML because
customer X wants it backported to his custom kernel Y ASAP.
>
>>> some of the changes in 1..18, that makes it more robust from that PoV.
>>
>> No, it can't. I thought people first read the code and only then
>> comment; otherwise it's just a waste of time.
Thanks,
Olek