Message-ID: <32f03225-636d-4677-b5f6-2dde8ab76b6d@molgen.mpg.de>
Date: Fri, 25 Apr 2025 12:30:37 +0200
From: Paul Menzel <pmenzel@...gen.mpg.de>
To: Larysa Zaremba <larysa.zaremba@...el.com>
Cc: intel-wired-lan@...ts.osuosl.org, Tony Nguyen
<anthony.l.nguyen@...el.com>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>,
Jonathan Corbet <corbet@....net>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>, Jiri Pirko
<jiri@...nulli.us>, Tatyana Nikolova <tatyana.e.nikolova@...el.com>,
Andrew Lunn <andrew+netdev@...n.ch>,
Alexander Lobakin <aleksander.lobakin@...el.com>,
Michael Ellerman <mpe@...erman.id.au>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>, Lee Trager
<lee@...ger.us>, Madhavan Srinivasan <maddy@...ux.ibm.com>,
Sridhar Samudrala <sridhar.samudrala@...el.com>,
Jacob Keller <jacob.e.keller@...el.com>,
Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>,
Mateusz Polchlopek <mateusz.polchlopek@...el.com>,
Ahmed Zaki <ahmed.zaki@...el.com>, netdev@...r.kernel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
"Karlsson, Magnus" <magnus.karlsson@...el.com>,
Emil Tantilov <emil.s.tantilov@...el.com>,
Madhu Chittim <madhu.chittim@...el.com>, Josh Hay <joshua.a.hay@...el.com>,
Milena Olech <milena.olech@...el.com>, pavan.kumar.linga@...el.com,
"Singhai, Anjali" <anjali.singhai@...el.com>,
Michal Kubiak <michal.kubiak@...el.com>
Subject: Re: [Intel-wired-lan] [PATCH iwl-next v2 08/14] idpf: refactor idpf
to use libie controlq and Xn APIs

Dear Larysa,

On 25.04.25 at 12:11, Larysa Zaremba wrote:
> On Thu, Apr 24, 2025 at 05:32:17PM +0200, Paul Menzel wrote:
>> On 24.04.25 at 13:32, Larysa Zaremba wrote:
>>> From: Pavan Kumar Linga <pavan.kumar.linga@...el.com>
>>>
>>> Support for initializing and configuring the controlq, Xn manager,
>>> MMIO and reset APIs was introduced in libie. As part of it,
>>> most of the existing controlq structures are renamed and
>>> modified. Use those APIs in idpf and make all the necessary changes.
>>>
>>> Previously, for the send and receive virtchnl messages, there
>>> used to be a memcpy involved in the controlq code to copy the buffer
>>> info passed by the send function into the controlq-specific
>>> buffers. There was no restriction against using automatic memory
>>> in that case. The new implementation in libie removed the copying
>>> of the send buffer info and introduced DMA mapping of the
>>> send buffer itself. To accommodate this, use dynamic memory for
>>> the send buffers. In the case of receive, idpf receives a page pool
>>> buffer allocated by libie, and care should be taken to
>>> release it after use in idpf.
>>>
>>> The changes are fairly trivial and localized, with a notable exception
>>> being the consolidation of idpf_vc_xn_shutdown and idpf_deinit_dflt_mbx
>>> under the latter name. This has some additional consequences that are
>>> addressed in the following patches.
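To make the buffer-ownership change described above concrete, here is a rough
sketch of what a caller would do now; the helper names libie_cp_send_msg() and
libie_cp_release_rx_buf() are made up for illustration and are not the actual
libie API:

/*
 * Minimal sketch of the buffer-ownership rule from the commit message;
 * libie_cp_send_msg() and libie_cp_release_rx_buf() are hypothetical
 * placeholder names, not the real libie entry points.
 */
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/slab.h>

struct example_ctx;					/* stands in for the idpf adapter */

int libie_cp_send_msg(struct example_ctx *ctx, void *buf, size_t len);	/* hypothetical */
void libie_cp_release_rx_buf(struct example_ctx *ctx, void *rx_buf);	/* hypothetical */

static int example_send_vc_msg(struct example_ctx *ctx, size_t len)
{
	void *buf;
	int err;

	/*
	 * The send buffer is now DMA-mapped directly by the control queue
	 * code, so it must come from the allocator; an automatic (on-stack)
	 * buffer is no longer acceptable here.
	 */
	buf = kzalloc(len, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* ... fill buf with the virtchnl message payload ... */

	err = libie_cp_send_msg(ctx, buf, len);
	kfree(buf);

	return err;
}

static void example_handle_rx(struct example_ctx *ctx, void *rx_buf)
{
	/* ... consume the page pool buffer handed over by libie ... */

	/* The receiver is responsible for giving the buffer back. */
	libie_cp_release_rx_buf(ctx, rx_buf);
}

The point is simply that the buffer handed to the control queue is mapped for
DMA as-is, so it has to outlive the caller's stack frame and come from the
allocator, while the received page pool buffer has to be returned by idpf once
it is done with it.
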
>>
>> (You could reflow the text above to have consistent line length.)
Re-flowing would save at least one line.
>> Also, how can your patchset be verified?
>
> Just normal regression testing with kernel debug options enabled. A large
> portion of the touched code is covered by just loading and unloading the driver
> and doing a PCI reset; things like PTP need to be checked separately, because
> PTP heavily uses the control queue itself.
>
>> Does the module size change?
>
> idpf size does decrease, but the overall size increases. It was 585728 B for
> idpf; now it is 557056 + 16384 + 53248 [B], which amounts to +40 KB of storage
> usage on systems that will not use ixd.
>
> After
> *********
> idpf 557056 0
> ixd 40960 0
> libie_pci 16384 2 ixd,idpf
> libie_cp 53248 2 ixd,idpf
> libeth 16384 2 idpf,libie_cp
>
> Before
> *********
> idpf 585728 0
> libeth 16384 1 idpf
>
>> Is the resource usage different for certain test cases?
>
> We now pre-allocate small TX buffers, so that does increase the memory usage,
> but reduces the need to allocate. This results in an additional 256 * 128 B of
> memory permanently used, but our ctlq RX buffers need to be of size 4096 B anyway
> (not changed by the patchset), so this is hardly noticeable.
>
> The worst-case memory usage should stay almost the same, plus the above-mentioned 32 KB.
> As for the timings, we are mostly limited by the HW response time, which is far
> from instant.
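As a quick sanity check of the arithmetic: 256 pre-allocated TX buffers of
128 B each come to 32768 B, i.e. the 32 KB you mention, while each RX control
queue buffer stays at 4096 B. A trivial stand-alone check, with illustrative
constant names only:

/* Back-of-the-envelope check of the figures above; names are illustrative. */
#include <stdio.h>

#define CTLQ_TX_BUF_COUNT	256
#define CTLQ_TX_BUF_SIZE	128	/* bytes, pre-allocated per send buffer */
#define CTLQ_RX_BUF_SIZE	4096	/* bytes, unchanged by the series */

int main(void)
{
	unsigned long tx_total = (unsigned long)CTLQ_TX_BUF_COUNT * CTLQ_TX_BUF_SIZE;

	printf("permanent TX buffer memory: %lu bytes (%lu KiB)\n",
	       tx_total, tx_total / 1024);	/* prints 32768 bytes (32 KiB) */
	printf("one RX control-queue buffer: %d bytes\n", CTLQ_RX_BUF_SIZE);

	return 0;
}
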
Thank you for all these details. I’d love to see those in the commit
message too.
>>> Reviewed-by: Michal Kubiak <michal.kubiak@...el.com>
>>> Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@...el.com>
>>> Co-developed-by: Larysa Zaremba <larysa.zaremba@...el.com>
>>> Signed-off-by: Larysa Zaremba <larysa.zaremba@...el.com>
>>> ---
>>> drivers/net/ethernet/intel/idpf/Kconfig | 1 +
>>> drivers/net/ethernet/intel/idpf/Makefile | 2 -
>>> drivers/net/ethernet/intel/idpf/idpf.h | 42 +-
>>> .../net/ethernet/intel/idpf/idpf_controlq.c | 624 -------
>>> .../net/ethernet/intel/idpf/idpf_controlq.h | 130 --
>>> .../ethernet/intel/idpf/idpf_controlq_api.h | 177 --
>>> .../ethernet/intel/idpf/idpf_controlq_setup.c | 171 --
>>> drivers/net/ethernet/intel/idpf/idpf_dev.c | 91 +-
>>> drivers/net/ethernet/intel/idpf/idpf_lib.c | 49 +-
>>> drivers/net/ethernet/intel/idpf/idpf_main.c | 87 +-
>>> drivers/net/ethernet/intel/idpf/idpf_mem.h | 20 -
>>> drivers/net/ethernet/intel/idpf/idpf_txrx.h | 2 +-
>>> drivers/net/ethernet/intel/idpf/idpf_vf_dev.c | 89 +-
>>> .../net/ethernet/intel/idpf/idpf_virtchnl.c | 1622 ++++++-----------
>>> .../net/ethernet/intel/idpf/idpf_virtchnl.h | 89 +-
>>> .../ethernet/intel/idpf/idpf_virtchnl_ptp.c | 303 ++-
>>> 16 files changed, 886 insertions(+), 2613 deletions(-)
>>> delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq.c
>>> delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq.h
>>> delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq_api.h
>>> delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq_setup.c
>>> delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_mem.h
>>
>> […]
Kind regards,
Paul