Message-ID: <fa23dd13-81e4-4454-9d00-c9f64842e59c@gmail.com>
Date: Mon, 8 Jan 2024 03:42:00 +0200
From: Sergey Ryazanov <ryazanov.s.a@...il.com>
To: Antonio Quartulli <antonio@...nvpn.net>
Cc: netdev@...r.kernel.org, Jakub Kicinski <kuba@...nel.org>
Subject: Re: [PATCH net-next 0/1] Introducing OpenVPN Data Channel Offload
Hi Antonio,
On 08.01.2024 01:32, Antonio Quartulli wrote:
> Hi Sergey,
>
> Thanks for jumping in
>
> On 06/01/2024 23:29, Sergey Ryazanov wrote:
>> Hi Antonio,
>>
>> On 06.01.2024 23:57, Antonio Quartulli wrote:
>>> I tend to agree that a unique large patch is harder to review, but
>>> splitting the code into several patches proved to be quite cumbersome,
>>> therefore I preferred to not do it. I believe the code can still be
>>> reviewed file by file, despite being in the same patch.
>>
>> I am happy to know that the project is ongoing. But I stopped the
>> review after reading these lines. You would need an AI to review "35
>> files changed, 5914 insertions(+)" at once. Last time I checked, I was
>> human. Sorry.
>>
>> Or you can see it like this: if the submitter does not care, then why
>> should anyone else?
>
> I am sorry - I did not mean to be careless/sloppy.
Sure, I did not mean to say that; I just gave you a hint of how it can
look from the outside. It looks like you did a great job implementing
this, so let's do a final thorough review before merging it.
> I totally understand, but I truly burnt so much time on finding a
> reasonable way to split this patch that I had to give up at some point.
Yep, but if the submitter does not split it properly, then a reviewer
has to do so. Right?
> I get your input, but do you think that turning it into 35 patches of 1
> file each (just as a random example) will make it easier to digest?
It is a pleasure to review a well-arranged series that explains, step by
step, how the functionality should be implemented and how it will work.
But splitting a huge driver into one-file-per-patch chunks is not a good
approach either. As you already mentioned, it gives almost nothing in
terms of review simplification. E.g. a file can contain a set of library
functions, so checking them without any calling examples makes no sense,
etc.
Thus, I would suggest taking the usual approach and turning the code
into implementation steps. Just imagine the sequence of steps you would
take if you were implementing the functionality from scratch. You can
also think of it as a presentation.
Just an example of how I would break such work into separate patches:
1. changes to common kernel components: one patch per changed subsystem
(e.g. netlink, socket helpers, whatever);
2. skeleton of the future module: initialization, deinitialization,
registration in the necessary subsystems (genetlink), with most of the
required callbacks as mere stubs;
3. basic tunnel management implementation: tunnel creation and
destruction, network interface creation, etc.; again minimalistic, no
actual packet processing, just drop packets at this stage;
4. TX path implementation: the bare minimum of data processing;
5. RX path implementation: again, the bare minimum;
6. various service functions: keepalive, rekeying, peer source address
updating, etc.; preferably in a one-patch-per-feature manner.
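One way to do the mechanical side of such a split, sketched here on a
toy repository (the file names are made up; in a real series they would
be per-subsystem), is to soft-reset the huge commit and re-commit its
changes step by step:

```shell
# Toy demonstration: split one huge commit into per-step commits.
# Paths are illustrative; in the real series they would be per-subsystem.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"

# Simulate history: a base commit, then one huge commit touching two files.
echo base > netlink.c; echo base > tunnel.c
git add -A; git commit -qm "base"
echo feature >> netlink.c; echo feature >> tunnel.c
git add -A; git commit -qm "huge: everything at once"

# Undo the huge commit but keep its changes staged, then unstage them
# and re-commit one logical step at a time.
git reset --soft HEAD~1
git restore --staged .
git add netlink.c; git commit -qm "step 1: netlink bits"
git add tunnel.c; git commit -qm "step 2: tunnel management"

git log --oneline
```

`git add -p` helps when a single file has to be split across several of
the steps.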
This is just a hint. You, as the author, surely know better how to
present this in the most understandable way.
Just a couple of general tips. Common changes to other parts of the
kernel (e.g. the mentioned NLA_POLICY_MAX_LEN) should come as a
dedicated patch with a justification for how they make life easier. The
exception here is identifiers that the code is going to use (e.g.
UDP_ENCAP_OVPNINUDP); those are better added together with the
implementation. Each patch in the series must not break the kernel
build: if you use any macro or call any routine, it should be added in
the same patch or an earlier one. The build bot will strongly object to
patches that break something. As you have already seen, it is even
intolerant of new compilation warnings :)
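To catch such breakage before the bot does, each commit of a series can
be checked in turn with `git rebase --exec`. The sketch below
demonstrates the mechanism on a toy repository with a trivial check
standing in for the real build command (which for the kernel would be
something like "make -j$(nproc)"):

```shell
# Toy demonstration: run a check against every commit being replayed.
# The check here is trivial; in a kernel tree it would be the build.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"

echo one > file.c; git add file.c; git commit -qm "patch 1"
echo two >> file.c; git commit -aqm "patch 2"

# Replay the commits after HEAD~1, running the check after each one;
# the rebase stops at the first commit where the check fails.
git rebase --exec "test -f file.c" HEAD~1
```

This is handy right before posting: rebase the whole series onto its
base with a build as the exec command, and any bisection-breaking patch
is flagged immediately.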
> Anyway, I will give it another try (the test robot complained about
> something, so it seems I need to resend the patch anyway) and I'll see
> where I land.
>
> Cheers!
Cheers!
--
Sergey