Message-ID: <7348f2c9f594dd494732c481c0e35638ae064988.camel@redhat.com>
Date: Mon, 01 Jul 2024 16:23:56 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Allen <allen.lkml@...il.com>
Cc: kuba@...nel.org, Guo-Fu Tseng <cooldavid@...ldavid.org>, "David S.
Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
jes@...ined-monkey.org, kda@...ux-powerpc.org, cai.huoqing@...ux.dev,
dougmill@...ux.ibm.com, npiggin@...il.com, christophe.leroy@...roup.eu,
aneesh.kumar@...nel.org, naveen.n.rao@...ux.ibm.com, nnac123@...ux.ibm.com,
tlfalcon@...ux.ibm.com, marcin.s.wojtas@...il.com, mlindner@...vell.com,
stephen@...workplumber.org, nbd@....name, sean.wang@...iatek.com,
Mark-MC.Lee@...iatek.com, lorenzo@...nel.org, matthias.bgg@...il.com,
angelogioacchino.delregno@...labora.com, borisp@...dia.com,
bryan.whitehead@...rochip.com, UNGLinuxDriver@...rochip.com,
louis.peens@...igine.com, richardcochran@...il.com,
linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-acenic@...site.dk, linux-net-drivers@....com, netdev@...r.kernel.org
Subject: Re: [PATCH 13/15] net: jme: Convert tasklet API to new bottom half
workqueue mechanism
On Mon, 2024-07-01 at 03:13 -0700, Allen wrote:
> > > @@ -1326,22 +1326,22 @@ static void jme_link_change_work(struct work_struct *work)
> > > jme_start_shutdown_timer(jme);
> > > }
> > >
> > > - goto out_enable_tasklet;
> > > + goto out_enable_bh_work;
> > >
> > > err_out_free_rx_resources:
> > > jme_free_rx_resources(jme);
> > > -out_enable_tasklet:
> > > - tasklet_enable(&jme->txclean_task);
> > > - tasklet_enable(&jme->rxclean_task);
> > > - tasklet_enable(&jme->rxempty_task);
> > > +out_enable_bh_work:
> > > + enable_and_queue_work(system_bh_wq, &jme->txclean_bh_work);
> > > + enable_and_queue_work(system_bh_wq, &jme->rxclean_bh_work);
> > > + enable_and_queue_work(system_bh_wq, &jme->rxempty_bh_work);
> >
> > This will unconditionally schedule the rxempty_bh_work, which is AFAICS a
> > different behavior from the code prior to this patch.
> >
> > In turn the rxempty_bh_work() will emit (almost unconditionally) the
> > 'RX Queue Full!' message, so the change should be visible to the user.
> >
> > I think you should queue the work only if it was queued at cancel time.
> > You likely need additional status to do that.
> >
>
> Thank you for taking the time to review. Now that it's been a week, I was
> preparing to send out version 3. Before I do that, I want to make sure
> the approach below is acceptable.
I _think_ the following does not track the rxempty_bh_work 'queued'
status fully/correctly.
> @@ -1282,9 +1282,9 @@ static void jme_link_change_work(struct work_struct *work)
> jme_stop_shutdown_timer(jme);
>
> jme_stop_pcc_timer(jme);
> - tasklet_disable(&jme->txclean_task);
> - tasklet_disable(&jme->rxclean_task);
> - tasklet_disable(&jme->rxempty_task);
> + disable_work_sync(&jme->txclean_bh_work);
> + disable_work_sync(&jme->rxclean_bh_work);
> + disable_work_sync(&jme->rxempty_bh_work);
I think the above should be:
	jme->rxempty_bh_work_queued =
		disable_work_sync(&jme->rxempty_bh_work);
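(disable_work_sync() returns true if the work item was pending at the
time it was disabled, which is exactly the "queued at cancel time"
state you need to remember.)

As a rough sketch, the extra status could be a single flag in struct
jme_adapter; the field name matches the one used below, but its
placement here is illustrative only:

	struct jme_adapter {
		/* ... existing fields ... */

		/* set if rxempty_bh_work was pending when it was last
		 * disabled, so the enable path knows whether to
		 * re-queue it
		 */
		bool			rxempty_bh_work_queued;
	};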
[...]
> @@ -1326,22 +1326,23 @@ static void jme_link_change_work(struct work_struct *work)
> jme_start_shutdown_timer(jme);
> }
>
> - goto out_enable_tasklet;
> + goto out_enable_bh_work;
>
> err_out_free_rx_resources:
> jme_free_rx_resources(jme);
> -out_enable_tasklet:
> - tasklet_enable(&jme->txclean_task);
> - tasklet_enable(&jme->rxclean_task);
> - tasklet_enable(&jme->rxempty_task);
> +out_enable_bh_work:
> + enable_and_queue_work(system_bh_wq, &jme->txclean_bh_work);
> + enable_and_queue_work(system_bh_wq, &jme->rxclean_bh_work);
> + if (jme->rxempty_bh_work_queued)
> + enable_and_queue_work(system_bh_wq, &jme->rxempty_bh_work);
Missing:
	else
		enable_work(&jme->rxempty_bh_work);
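The else branch is needed because disable_work_sync() increments the
work item's disable count; every disable must be balanced by an
enable_work() regardless, and the flag only decides whether the work is
also re-queued.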
[...]
> @@ -3180,9 +3182,9 @@ jme_suspend(struct device *dev)
> netif_stop_queue(netdev);
> jme_stop_irq(jme);
>
> - tasklet_disable(&jme->txclean_task);
> - tasklet_disable(&jme->rxclean_task);
> - tasklet_disable(&jme->rxempty_task);
> + disable_work_sync(&jme->txclean_bh_work);
> + disable_work_sync(&jme->rxclean_bh_work);
> + disable_work_sync(&jme->rxempty_bh_work);
should be:
	jme->rxempty_bh_work_queued =
		disable_work_sync(&jme->rxempty_bh_work);
>
> @@ -3198,9 +3200,10 @@ jme_suspend(struct device *dev)
> jme->phylink = 0;
> }
>
> - tasklet_enable(&jme->txclean_task);
> - tasklet_enable(&jme->rxclean_task);
> - tasklet_enable(&jme->rxempty_task);
> + enable_and_queue_work(system_bh_wq, &jme->txclean_bh_work);
> + enable_and_queue_work(system_bh_wq, &jme->rxclean_bh_work);
> + jme->rxempty_bh_work_queued = true;
> + enable_and_queue_work(system_bh_wq, &jme->rxempty_bh_work);
should be:
	if (jme->rxempty_bh_work_queued)
		enable_and_queue_work(system_bh_wq, &jme->rxempty_bh_work);
	else
		enable_work(&jme->rxempty_bh_work);
I think the above ones are the only places where you need to touch
'rxempty_bh_work_queued'.
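Putting the pieces together, the pairing would look roughly like this
(a sketch only, using the flag discussed above):

	/* disable side (jme_link_change_work() / jme_suspend()) */
	disable_work_sync(&jme->txclean_bh_work);
	disable_work_sync(&jme->rxclean_bh_work);
	jme->rxempty_bh_work_queued =
		disable_work_sync(&jme->rxempty_bh_work);

	/* enable side: always re-enable, but only re-queue
	 * rxempty_bh_work if it was pending when disabled
	 */
	enable_and_queue_work(system_bh_wq, &jme->txclean_bh_work);
	enable_and_queue_work(system_bh_wq, &jme->rxclean_bh_work);
	if (jme->rxempty_bh_work_queued)
		enable_and_queue_work(system_bh_wq, &jme->rxempty_bh_work);
	else
		enable_work(&jme->rxempty_bh_work);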
[...]
> Do we need a flag for rxclean and txclean too?
Functionally speaking I don't think it will be necessary, as
rxclean_bh_work() and txclean_bh_work() don't emit warnings on spurious
invocation.
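A spurious run of those two should just find no completed descriptors
to clean and return, so the unconditional enable_and_queue_work() is
harmless there.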
Thanks,
Paolo