Message-ID: <aUAy5vJ04DWAVohe@soc-5CG4396X81.clients.intel.com>
Date: Mon, 15 Dec 2025 17:10:14 +0100
From: Larysa Zaremba <larysa.zaremba@...el.com>
To: "Tantilov, Emil S" <emil.s.tantilov@...el.com>
CC: <intel-wired-lan@...ts.osuosl.org>, Tony Nguyen
<anthony.l.nguyen@...el.com>, <aleksander.lobakin@...el.com>,
<sridhar.samudrala@...el.com>, "Singhai, Anjali" <anjali.singhai@...el.com>,
Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>, "Fijalkowski,
Maciej" <maciej.fijalkowski@...el.com>, Madhu Chittim
<madhu.chittim@...el.com>, Josh Hay <joshua.a.hay@...el.com>, "Keller, Jacob
E" <jacob.e.keller@...el.com>, <jayaprakash.shanmugam@...el.com>,
<natalia.wochtman@...el.com>, Jiri Pirko <jiri@...nulli.us>, "David S.
Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, "Jakub
Kicinski" <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Simon Horman
<horms@...nel.org>, Jonathan Corbet <corbet@....net>, Richard Cochran
<richardcochran@...il.com>, Przemek Kitszel <przemyslaw.kitszel@...el.com>,
Andrew Lunn <andrew+netdev@...n.ch>, <netdev@...r.kernel.org>,
<linux-doc@...r.kernel.org>, <linux-kernel@...r.kernel.org>, "Aleksandr
Loktionov" <aleksandr.loktionov@...el.com>
Subject: Re: [PATCH iwl-next v5 09/15] idpf: refactor idpf to use libie
control queues
On Wed, Dec 10, 2025 at 07:42:53PM -0800, Tantilov, Emil S wrote:
>
> On 11/17/2025 5:48 AM, Larysa Zaremba wrote:
> > From: Pavan Kumar Linga <pavan.kumar.linga@...el.com>
> >
> > Support to initialize and configure controlqs, and manage their
> > transactions was introduced in libie. As part of it, most of the existing
> > controlq structures are renamed and modified. Use those APIs in idpf and
> > make all the necessary changes.
> >
> > Previously for the send and receive virtchnl messages, there used to be a
> > memcpy involved in controlq code to copy the buffer info passed by the send
> > function into the controlq specific buffers. There was no restriction to
> > use automatic memory in that case. The new implementation in libie removed
> > copying of the send buffer info and introduced DMA mapping of the send
> > buffer itself. To accommodate it, use dynamic memory for the larger send
> > buffers. For smaller ones (<= 128 bytes) libie still can copy them into the
> > pre-allocated message memory.
> >
> > In case of receive, idpf receives a page pool buffer allocated by libie,
> > and care must be taken to release it in idpf after use.
> >
> > The changes are fairly trivial and localized, with a notable exception
> > being the consolidation of idpf_vc_xn_shutdown and idpf_deinit_dflt_mbx
> > under the latter name. This has some additional consequences that are
> > addressed in the following patches.
> >
> > This refactoring introduces roughly 40KB of additional module storage on
> > systems that only run idpf, so idpf + libie_cp + libie_pci takes about
> > 7% more storage than just idpf before refactoring.
> >
> > We now pre-allocate small TX buffers, so that does increase the memory
> > usage, but reduces the need to allocate. This results in additional 256 *
> > 128B of memory permanently used, increasing the worst-case memory usage by
> > 32KB but our ctlq RX buffers need to be of size 4096B anyway (not changed
> > by the patchset), so this is hardly noticeable.
> >
> > As for the timings, we are mostly limited by the HW response time, which
> > is far from instant, and that is not changed by this refactor.
> >
> > Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@...el.com>
> > Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@...el.com>
> > Co-developed-by: Larysa Zaremba <larysa.zaremba@...el.com>
> > Signed-off-by: Larysa Zaremba <larysa.zaremba@...el.com>
> > ---
> > drivers/net/ethernet/intel/idpf/Makefile | 2 -
> > drivers/net/ethernet/intel/idpf/idpf.h | 28 +-
> > .../net/ethernet/intel/idpf/idpf_controlq.c | 633 -------
> > .../net/ethernet/intel/idpf/idpf_controlq.h | 142 --
> > .../ethernet/intel/idpf/idpf_controlq_api.h | 177 --
> > .../ethernet/intel/idpf/idpf_controlq_setup.c | 171 --
> > drivers/net/ethernet/intel/idpf/idpf_dev.c | 60 +-
> > .../net/ethernet/intel/idpf/idpf_ethtool.c | 20 +-
> > drivers/net/ethernet/intel/idpf/idpf_lib.c | 67 +-
> > drivers/net/ethernet/intel/idpf/idpf_main.c | 5 -
> > drivers/net/ethernet/intel/idpf/idpf_mem.h | 20 -
> > drivers/net/ethernet/intel/idpf/idpf_txrx.h | 2 +-
> > drivers/net/ethernet/intel/idpf/idpf_vf_dev.c | 67 +-
> > .../net/ethernet/intel/idpf/idpf_virtchnl.c | 1580 ++++++-----------
> > .../net/ethernet/intel/idpf/idpf_virtchnl.h | 90 +-
> > .../ethernet/intel/idpf/idpf_virtchnl_ptp.c | 239 ++-
> > 16 files changed, 783 insertions(+), 2520 deletions(-)
> > delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq.c
> > delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq.h
> > delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq_api.h
> > delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq_setup.c
> > delete mode 100644 drivers/net/ethernet/intel/idpf/idpf_mem.h
> >
>
> <snip>
>
> > diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
> > index e15b1e8effc8..7751a81fc29d 100644
> > --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
> > +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
> > @@ -1363,6 +1363,7 @@ void idpf_statistics_task(struct work_struct *work)
> > */
> > void idpf_mbx_task(struct work_struct *work)
> > {
> > + struct libie_ctlq_xn_recv_params xn_params;
> > struct idpf_adapter *adapter;
> > adapter = container_of(work, struct idpf_adapter, mbx_task.work);
> > @@ -1373,7 +1374,14 @@ void idpf_mbx_task(struct work_struct *work)
> > queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task,
> > usecs_to_jiffies(300));
> > - idpf_recv_mb_msg(adapter, adapter->hw.arq);
> > + xn_params = (struct libie_ctlq_xn_recv_params) {
> > + .xnm = adapter->xn_init_params.xnm,
> > + .ctlq = adapter->arq,
> > + .ctlq_msg_handler = idpf_recv_event_msg,
> > + .budget = LIBIE_CTLQ_MAX_XN_ENTRIES,
> > + };
> > +
> > + libie_ctlq_xn_recv(&xn_params);
> > }
> > /**
> > @@ -1907,7 +1915,6 @@ static void idpf_init_hard_reset(struct idpf_adapter *adapter)
> > idpf_vc_core_deinit(adapter);
> > if (!is_reset)
> > reg_ops->trigger_reset(adapter, IDPF_HR_FUNC_RESET);
> > - idpf_deinit_dflt_mbx(adapter);
> > } else {
> > dev_err(dev, "Unhandled hard reset cause\n");
> > err = -EBADRQC;
> > @@ -1972,19 +1979,11 @@ void idpf_vc_event_task(struct work_struct *work)
> > if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
> > return;
> > - if (test_bit(IDPF_HR_FUNC_RESET, adapter->flags))
> > - goto func_reset;
> > -
> > - if (test_bit(IDPF_HR_DRV_LOAD, adapter->flags))
> > - goto drv_load;
> > -
> > - return;
> > -
> > -func_reset:
> > - idpf_vc_xn_shutdown(adapter->vcxn_mngr);
>
> This will cause a regression where VC can timeout on reset:
> https://lore.kernel.org/intel-wired-lan/20250508184715.7631-1-emil.s.tantilov@intel.com/
>
> I think we can keep this logic, remove the calls to vc_xn_shutdown in
> idpf_vc_core_deinit() and add it to idpf_remove().
>
> Thanks,
> Emil
>
Thank you for bringing this up!
It's a shame that the solution we previously agreed on has such unintended
consequences. Looking at it this way, I see no good solution other than
bringing back the xnm shutdown, but in libie. See the suggested diff below.
Please let me know if it works for you.
When fixed up in the final patch, idpf_vc_event_task will only have one
changed line:
@@ -1981,7 +1986,7 @@ void idpf_vc_event_task(struct work_struct *work)
return;
func_reset:
- idpf_vc_xn_shutdown(adapter->vcxn_mngr);
+ libie_ctlq_xn_shutdown(adapter->xnm);
drv_load:
set_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
idpf_init_hard_reset(adapter);
libie_ctlq_xn_shutdown() sets the state to shutdown, so no new xns can be
taken (they fail with -EAGAIN) and running xns are prematurely completed with
a timed-out error. At the same time, it does not free any memory, so there
are no use-after-free risks.
---------------------
The main diff:
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 69eb72ed6b99..dff931ebbd9f 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -1977,12 +1977,19 @@ void idpf_vc_event_task(struct work_struct *work)
if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
return;
+ if (test_bit(IDPF_HR_FUNC_RESET, adapter->flags))
+ goto func_reset;
- if (test_bit(IDPF_HR_FUNC_RESET, adapter->flags) ||
- test_bit(IDPF_HR_DRV_LOAD, adapter->flags)) {
- set_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
- idpf_init_hard_reset(adapter);
- }
+ if (test_bit(IDPF_HR_DRV_LOAD, adapter->flags))
+ goto drv_load;
+
+ return;
+
+func_reset:
+ libie_ctlq_xn_shutdown(adapter->xn_init_params.xnm);
+drv_load:
+ set_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
+ idpf_init_hard_reset(adapter);
}
/**
diff --git a/drivers/net/ethernet/intel/libie/controlq.c b/drivers/net/ethernet/intel/libie/controlq.c
index 9b24a87872e5..a39ee6ea37f1 100644
--- a/drivers/net/ethernet/intel/libie/controlq.c
+++ b/drivers/net/ethernet/intel/libie/controlq.c
@@ -694,7 +694,7 @@ static void libie_ctlq_xn_put(struct libie_ctlq_xn_manager *xnm,
*/
static void libie_ctlq_xn_deinit_dma(struct device *dev,
struct libie_ctlq_xn_manager *xnm,
- u32 num_entries)
+ u32 num_entries)
{
for (u32 i = 0; i < num_entries; i++) {
struct libie_ctlq_xn *xn = &xnm->ring[i];
@@ -1093,14 +1093,12 @@ u32 libie_ctlq_xn_send_clean(const struct libie_ctlq_xn_clean_params *params)
EXPORT_SYMBOL_NS_GPL(libie_ctlq_xn_send_clean, "LIBIE_CP");
/**
- * libie_ctlq_xn_deinit - deallocate and free the transaction manager resources
+ * libie_ctlq_xn_shutdown - terminate control queue transactions
* @xnm: pointer to the transaction manager
- * @ctx: controlq context structure
*
- * All Rx processing must be stopped beforehand.
+ * Synchronously terminate existing transactions and stop accepting new ones.
*/
-void libie_ctlq_xn_deinit(struct libie_ctlq_xn_manager *xnm,
- struct libie_ctlq_ctx *ctx)
+void libie_ctlq_xn_shutdown(struct libie_ctlq_xn_manager *xnm)
{
bool must_wait = false;
u32 i;
@@ -1129,7 +1127,20 @@ void libie_ctlq_xn_deinit(struct libie_ctlq_xn_manager *xnm,
if (must_wait)
wait_for_completion(&xnm->can_destroy);
+}
+EXPORT_SYMBOL_NS_GPL(libie_ctlq_xn_shutdown, "LIBIE_CP");
+/**
+ * libie_ctlq_xn_deinit - deallocate and free the transaction manager resources
+ * @xnm: pointer to the transaction manager
+ * @ctx: controlq context structure
+ *
+ * All Rx processing must be stopped beforehand.
+ */
+void libie_ctlq_xn_deinit(struct libie_ctlq_xn_manager *xnm,
+ struct libie_ctlq_ctx *ctx)
+{
+ libie_ctlq_xn_shutdown(xnm);
libie_ctlq_xn_deinit_dma(&ctx->mmio_info.pdev->dev, xnm,
LIBIE_CTLQ_MAX_XN_ENTRIES);
kfree(xnm);
diff --git a/include/linux/intel/libie/controlq.h b/include/linux/intel/libie/controlq.h
index 4a627670814e..98f082b5d039 100644
--- a/include/linux/intel/libie/controlq.h
+++ b/include/linux/intel/libie/controlq.h
@@ -414,6 +414,7 @@ struct libie_ctlq_xn_init_params {
int libie_ctlq_xn_init(struct libie_ctlq_xn_init_params *params);
void libie_ctlq_xn_deinit(struct libie_ctlq_xn_manager *xnm,
struct libie_ctlq_ctx *ctx);
+void libie_ctlq_xn_shutdown(struct libie_ctlq_xn_manager *xnm);
int libie_ctlq_xn_send(struct libie_ctlq_xn_send_params *params);
u32 libie_ctlq_xn_recv(struct libie_ctlq_xn_recv_params *params);
u32 libie_ctlq_xn_send_clean(const struct libie_ctlq_xn_clean_params *params);