Message-ID: <70458c52-75ef-4876-a4a3-c042c52ecdb3@intel.com>
Date: Wed, 12 Jun 2024 13:51:17 +0200
From: Przemek Kitszel <przemyslaw.kitszel@...el.com>
To: Jacob Keller <jacob.e.keller@...el.com>, Alexander Lobakin
<aleksander.lobakin@...el.com>, Mateusz Polchlopek
<mateusz.polchlopek@...el.com>, "Nguyen, Anthony L"
<anthony.l.nguyen@...el.com>
CC: <intel-wired-lan@...ts.osuosl.org>, <netdev@...r.kernel.org>, "Wojciech
Drewek" <wojciech.drewek@...el.com>
Subject: Re: [Intel-wired-lan] [PATCH iwl-next v7 09/12] iavf: refactor
iavf_clean_rx_irq to support legacy and flex descriptors
On 6/11/24 22:52, Jacob Keller wrote:
>
>
> On 6/11/2024 4:47 AM, Alexander Lobakin wrote:
>> From: Mateusz Polchlopek <mateusz.polchlopek@...el.com>
>> Date: Tue, 4 Jun 2024 09:13:57 -0400
>>
>>> From: Jacob Keller <jacob.e.keller@...el.com>
[..]
>> Thanks,
>> Olek
>
> Thanks for the detailed review. This is rather tricky to get right. The
> goal is to be able to support both the legacy descriptors for old PFs
> and the new flex descriptors for new features like timestamping, while
> avoiding a lot of near-duplicate logic.
>
> I guess you could achieve some of that via macros or some other
> construction that expands the code at compile time, so each variant can
> be optimized separately?
>
> I don't want to end up just duplicating the entire hot path in code...
> but I also don't want the opposite extreme where, to avoid duplication,
> we check the same values again and again.
>
> The goal is to make sure it's maintainable and to avoid the case where
> we introduce or fix bugs in one flow without fixing them in the
> others... But the current approach here is obviously not the most
> optimal way to achieve these goals.
>
Thank you Olek for providing the feedback, especially such an
insightful review!
@Tony, I would like to have this patch kept in the for-VAL bucket, if
only to double-check that applying the feedback has not accidentally
broken correctness. Bonus points if the retesting also shows
performance improvements :)
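
On the compile-time expansion Jacob mentions above: one common
construction in the kernel is a single __always_inline worker that
takes a const bool flag, plus two thin wrappers, so the compiler emits
two specialized copies of the hot path from one source. A minimal
sketch of that pattern (all identifiers below are made-up placeholders
for illustration, not the actual iavf code):

/* Sketch of compile-time specialization via __always_inline and a
 * constant flag. demo_rx_ring, demo_clean_rx_irq etc. are
 * hypothetical names, not the real iavf structures or functions.
 */
#include <stdbool.h>

#ifndef __always_inline
/* stand-in for the kernel's definition, to keep the sketch
 * self-contained */
#define __always_inline inline __attribute__((__always_inline__))
#endif

struct demo_rx_ring {
	int next_to_clean;	/* stand-in ring state */
};

static __always_inline int
demo_clean_rx_irq(struct demo_rx_ring *ring, int budget, const bool flex)
{
	int cleaned = 0;

	while (cleaned < budget) {
		if (flex) {
			/* parse flex descriptor layout (timestamp etc.) */
		} else {
			/* parse legacy descriptor layout */
		}
		ring->next_to_clean++;
		cleaned++;
	}

	return cleaned;
}

/* Thin wrappers: 'flex' is a compile-time constant in each, so the
 * branches above fold away and two specialized hot paths are emitted
 * without duplicating the loop in the sources.
 */
int demo_clean_rx_irq_legacy(struct demo_rx_ring *ring, int budget)
{
	return demo_clean_rx_irq(ring, budget, false);
}

int demo_clean_rx_irq_flex(struct demo_rx_ring *ring, int budget)
{
	return demo_clean_rx_irq(ring, budget, true);
}

That would keep exactly one copy of the loop in the sources (so fixes
land in both flows at once), while each wrapper pays no runtime
"which descriptor format?" checks.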