Message-ID: <f9a69abd-dabc-440a-a3cd-c88b184f7e77@intel.com>
Date: Wed, 10 Dec 2025 19:42:53 -0800
From: "Tantilov, Emil S" <emil.s.tantilov@...el.com>
To: Larysa Zaremba <larysa.zaremba@...el.com>,
<intel-wired-lan@...ts.osuosl.org>, Tony Nguyen <anthony.l.nguyen@...el.com>
CC: <aleksander.lobakin@...el.com>, <sridhar.samudrala@...el.com>, "Singhai,
Anjali" <anjali.singhai@...el.com>, Michal Swiatkowski
<michal.swiatkowski@...ux.intel.com>, "Fijalkowski, Maciej"
<maciej.fijalkowski@...el.com>, Madhu Chittim <madhu.chittim@...el.com>,
"Josh Hay" <joshua.a.hay@...el.com>, "Keller, Jacob E"
<jacob.e.keller@...el.com>, <jayaprakash.shanmugam@...el.com>,
<natalia.wochtman@...el.com>, Jiri Pirko <jiri@...nulli.us>, "David S.
Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Jakub
Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Simon Horman
<horms@...nel.org>, Jonathan Corbet <corbet@....net>, Richard Cochran
<richardcochran@...il.com>, Przemek Kitszel <przemyslaw.kitszel@...el.com>,
Andrew Lunn <andrew+netdev@...n.ch>, <netdev@...r.kernel.org>,
<linux-doc@...r.kernel.org>, <linux-kernel@...r.kernel.org>, Aleksandr
Loktionov <aleksandr.loktionov@...el.com>
Subject: Re: [PATCH iwl-next v5 09/15] idpf: refactor idpf to use libie
control queues
On 11/17/2025 5:48 AM, Larysa Zaremba wrote:
> From: Pavan Kumar Linga <pavan.kumar.linga@...el.com>
>
> Support for initializing and configuring control queues, and for
> managing their transactions, was introduced in libie. As part of
> that, most of the existing controlq structures were renamed and
> modified. Use those APIs in idpf and make all the necessary changes.
>
> Previously, for send and receive virtchnl messages, the controlq code
> did a memcpy to copy the buffer info passed by the send function into
> the controlq-specific buffers, so there was no restriction on using
> automatic (stack) memory. The new implementation in libie removes
> that copy and instead DMA-maps the send buffer itself. To accommodate
> this, use dynamically allocated memory for the larger send buffers.
> Smaller ones (<= 128 bytes) libie can still copy into the
> pre-allocated message memory.
>
> On receive, idpf gets a page pool buffer allocated by libie, and care
> must be taken to release it in idpf after use.
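(A small aside for readers less familiar with the new flow: callers end
up with roughly the pattern below. This is an untested illustration
rather than code from the patch; only the buffer lifetime rule follows
the description above.)

	void *msg;

	/* A message larger than 128 bytes is DMA-mapped by libie
	 * directly, so it must be heap-allocated and kept alive until
	 * the transaction completes.
	 */
	msg = kzalloc(msg_size, GFP_KERNEL);
	if (!msg)
		return -ENOMEM;

	/* ... fill in the message and send it through the libie_cp
	 * transaction API ...
	 */

	kfree(msg);

	/* A message of 128 bytes or less can still be built on the
	 * stack, since libie copies it into the pre-allocated message
	 * memory instead of mapping it.
	 */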
>
> The changes are fairly trivial and localized, with a notable exception
> being the consolidation of idpf_vc_xn_shutdown and idpf_deinit_dflt_mbx
> under the latter name. This has some additional consequences that are
> addressed in the following patches.
>
> This refactoring adds roughly 40KB of additional module storage on
> systems that only run idpf, so idpf + libie_cp + libie_pci takes
> about 7% more storage than idpf alone did before the refactoring.
>
> We now pre-allocate small TX buffers, which increases memory usage
> but reduces the need for runtime allocations. This permanently
> consumes an additional 256 * 128B of memory, increasing the
> worst-case memory usage by 32KB, but our ctlq RX buffers need to be
> 4096B each anyway (not changed by this patchset), so this is hardly
> noticeable.
>
> As for the timings, we are still mostly limited by the HW response
> time, which is far from instant, and that is not changed by this
> refactor.
>
> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@...el.com>
> Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@...el.com>
> Co-developed-by: Larysa Zaremba <larysa.zaremba@...el.com>
> Signed-off-by: Larysa Zaremba <larysa.zaremba@...el.com>
> ---
> drivers/net/ethernet/intel/idpf/Makefile | 2 -
> drivers/net/ethernet/intel/idpf/idpf.h | 28 +-
> .../net/ethernet/intel/idpf/idpf_controlq.c | 633 -------
> .../net/ethernet/intel/idpf/idpf_controlq.h | 142 --
> .../ethernet/intel/idpf/idpf_controlq_api.h | 177 --
> .../ethernet/intel/idpf/idpf_controlq_setup.c | 171 --
> drivers/net/ethernet/intel/idpf/idpf_dev.c | 60 +-
> .../net/ethernet/intel/idpf/idpf_ethtool.c | 20 +-
> drivers/net/ethernet/intel/idpf/idpf_lib.c | 67 +-
> drivers/net/ethernet/intel/idpf/idpf_main.c | 5 -
> drivers/net/ethernet/intel/idpf/idpf_mem.h | 20 -
> drivers/net/ethernet/intel/idpf/idpf_txrx.h | 2 +-
> drivers/net/ethernet/intel/idpf/idpf_vf_dev.c | 67 +-
> .../net/ethernet/intel/idpf/idpf_virtchnl.c | 1580 ++++++-----------
> .../net/ethernet/intel/idpf/idpf_virtchnl.h | 90 +-
> .../ethernet/intel/idpf/idpf_virtchnl_ptp.c | 239 ++-
> 16 files changed, 783 insertions(+), 2520 deletions(-)
> delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq.c
> delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq.h
> delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq_api.h
> delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq_setup.c
> delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_mem.h
>
<snip>
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
> index e15b1e8effc8..7751a81fc29d 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
> @@ -1363,6 +1363,7 @@ void idpf_statistics_task(struct work_struct *work)
> */
> void idpf_mbx_task(struct work_struct *work)
> {
> + struct libie_ctlq_xn_recv_params xn_params;
> struct idpf_adapter *adapter;
>
> adapter = container_of(work, struct idpf_adapter, mbx_task.work);
> @@ -1373,7 +1374,14 @@ void idpf_mbx_task(struct work_struct *work)
> queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task,
> usecs_to_jiffies(300));
>
> - idpf_recv_mb_msg(adapter, adapter->hw.arq);
> + xn_params = (struct libie_ctlq_xn_recv_params) {
> + .xnm = adapter->xn_init_params.xnm,
> + .ctlq = adapter->arq,
> + .ctlq_msg_handler = idpf_recv_event_msg,
> + .budget = LIBIE_CTLQ_MAX_XN_ENTRIES,
> + };
> +
> + libie_ctlq_xn_recv(&xn_params);
> }
>
> /**
> @@ -1907,7 +1915,6 @@ static void idpf_init_hard_reset(struct idpf_adapter *adapter)
> idpf_vc_core_deinit(adapter);
> if (!is_reset)
> reg_ops->trigger_reset(adapter, IDPF_HR_FUNC_RESET);
> - idpf_deinit_dflt_mbx(adapter);
> } else {
> dev_err(dev, "Unhandled hard reset cause\n");
> err = -EBADRQC;
> @@ -1972,19 +1979,11 @@ void idpf_vc_event_task(struct work_struct *work)
> if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
> return;
>
> - if (test_bit(IDPF_HR_FUNC_RESET, adapter->flags))
> - goto func_reset;
> -
> - if (test_bit(IDPF_HR_DRV_LOAD, adapter->flags))
> - goto drv_load;
> -
> - return;
> -
> -func_reset:
> - idpf_vc_xn_shutdown(adapter->vcxn_mngr);
This will cause a regression where VC can time out on reset:
https://lore.kernel.org/intel-wired-lan/20250508184715.7631-1-emil.s.tantilov@intel.com/
I think we can keep this logic, remove the call to vc_xn_shutdown in
idpf_vc_core_deinit(), and add it to idpf_remove() instead.
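Something along these lines (untested sketch, just to illustrate the
idea - and with this series the shutdown ends up folded into
idpf_deinit_dflt_mbx(), so the exact call may differ):

	static void idpf_remove(struct pci_dev *pdev)
	{
		struct idpf_adapter *adapter = pci_get_drvdata(pdev);

		...
		/* Shut down outstanding virtchnl transactions only on
		 * remove, instead of doing it in idpf_vc_core_deinit().
		 */
		idpf_vc_xn_shutdown(adapter->vcxn_mngr);
		idpf_vc_core_deinit(adapter);
		...
	}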
Thanks,
Emil