Message-ID: <Z8nUksjJyKEbP68-@LQ3V64L9R2>
Date: Thu, 6 Mar 2025 09:00:02 -0800
From: Joe Damato <jdamato@...tly.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: netdev@...r.kernel.org, mkarsten@...terloo.ca,
gerhard@...leder-embedded.com, jasowang@...hat.com,
xuanzhuo@...ux.alibaba.com, mst@...hat.com, leiyang@...hat.com,
Eugenio Pérez <eperezma@...hat.com>,
Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
"open list:VIRTIO CORE AND NET DRIVERS" <virtualization@...ts.linux.dev>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next v5 3/4] virtio-net: Map NAPIs to queues
On Wed, Mar 05, 2025 at 06:21:18PM -0800, Jakub Kicinski wrote:
> On Wed, 5 Mar 2025 17:42:35 -0800 Joe Damato wrote:
> > Two spots that come to mind are:
> > - in virtnet_probe where all the other netdev ops are plumbed
> > through, or
> > - above virtnet_disable_queue_pair which I assume a future queue
> > API implementor would need to call for ndo_queue_stop
>
> I'd put it next to some call which will have to be inspected.
> Normally we change napi_disable() to napi_disable_locked()
> for drivers using the instance lock, so maybe on the napi_disable()
> line in the refill?
Sure, that seems reasonable to me.
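Just to check my understanding of the conversion you're describing, I'd
expect the napi_disable() line in the refill path to eventually end up
looking something like the sketch below (hypothetical, not part of this
series; it assumes the driver has opted in to the netdev instance lock,
and virtnet_napi_enable_locked is a made-up name for a locked enable
helper that doesn't exist today):

	/* Sketch only: disable/enable the refill queue's NAPI while
	 * holding the netdev instance lock, using the locked variant
	 * instead of plain napi_disable().
	 */
	netdev_lock(vi->dev);
	napi_disable_locked(&rq->napi);
	still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
	virtnet_napi_enable_locked(rq);	/* made-up locked variant */
	netdev_unlock(vi->dev);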
Does this comment seem reasonable? I tried to distill what you said
in your previous message (thanks for the guidance, btw):
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d6c8fe670005..fe5f6313d422 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2883,6 +2883,18 @@ static void refill_work(struct work_struct *work)
 	for (i = 0; i < vi->curr_queue_pairs; i++) {
 		struct receive_queue *rq = &vi->rq[i];
 
+		/*
+		 * When queue API support is added in the future and the call
+		 * below becomes napi_disable_locked, this driver will need to
+		 * be refactored.
+		 *
+		 * One possible solution would be to:
+		 * - cancel refill_work with cancel_delayed_work (note: non-sync)
+		 * - cancel refill_work with cancel_delayed_work_sync in
+		 *   virtnet_remove after the netdev is unregistered
+		 * - wrap all of the work in a lock (perhaps vi->refill_lock?)
+		 * - check netif_running() and return early to avoid a race
+		 */
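
For completeness, here's a rough sketch of the refactor that comment is
describing, just to make sure we're picturing the same thing. It's
hypothetical and not part of this patch; I've used the netdev instance
lock rather than vi->refill_lock purely for illustration, and again
virtnet_napi_enable_locked is a made-up name:

	/* Hypothetical refactor of refill_work: bail out early if the
	 * device is not running, and do the NAPI disable/enable under
	 * the instance lock.  The other half would be cancelling the
	 * work itself (cancel_delayed_work() on close, and
	 * cancel_delayed_work_sync() in virtnet_remove() after the
	 * netdev is unregistered), which is not shown here.
	 */
	static void refill_work(struct work_struct *work)
	{
		struct virtnet_info *vi =
			container_of(work, struct virtnet_info, refill.work);
		bool still_empty;
		int i;

		for (i = 0; i < vi->curr_queue_pairs; i++) {
			struct receive_queue *rq = &vi->rq[i];

			netdev_lock(vi->dev);
			if (!netif_running(vi->dev)) {
				netdev_unlock(vi->dev);
				return;
			}

			napi_disable_locked(&rq->napi);
			still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
			virtnet_napi_enable_locked(rq);	/* made-up name */
			netdev_unlock(vi->dev);

			if (still_empty)
				schedule_delayed_work(&vi->refill, HZ / 2);
		}
	}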