Message-ID: <aN6MIX54e49yALeO@ada.csh.rit.edu>
Date: Thu, 2 Oct 2025 10:28:49 -0400
From: Mary Strodl <mstrodl@....rit.edu>
To: Tzung-Bi Shih <tzungbi@...nel.org>
Cc: linux-kernel@...r.kernel.org, linus.walleij@...aro.org, brgl@...ev.pl,
linux-gpio@...r.kernel.org
Subject: Re: [PATCH v2 1/3] gpio: mpsse: use rcu to ensure worker is torn down
Hello!
On Thu, Oct 02, 2025 at 10:02:21PM +0800, Tzung-Bi Shih wrote:
> The change looks irrelevant to the patch.
I can split this out into a separate patch in the series; I wasn't sure
whether it warranted one.
> I'm not sure: doesn't it need to use list_for_each_entry_safe() (or variants)
> as elements may be removed in the loop?
Absolutely! I noticed this too. Fix coming next revision :)
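For reference, my mental model of the _safe variant (just a sketch,
reusing names from the patch):

    struct mpsse_worker *worker, *tmp;

    /*
     * "tmp" caches the next entry before the body runs, so the
     * current entry may be unlinked (and even freed) safely.
     */
    list_for_each_entry_safe(worker, tmp, &priv->workers, list) {
        list_del(&worker->list);
        kfree(worker);
    }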
>
> > + /* Don't stop ourselves */
> > + if (worker == my_worker)
> > + continue;
> > +
> > + scoped_guard(raw_spinlock_irqsave, &priv->irq_spin)
> > + list_del_rcu(&worker->list);
>
> If RCU is used, does it still need to acquire the spinlock?
I believe so, yes. My understanding is that RCU lists are safe against
lockless readers, but writers still have to take a lock to serialize
list updates (add/remove) against each other:
https://www.kernel.org/doc/html/latest/RCU/listRCU.html#example-1-read-mostly-list-deferred-destruction
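For context, the split I have in mind looks roughly like this (a
sketch, not the actual driver code; handle() is a stand-in):

    unsigned long flags;

    /* Read side: no lock, just an RCU read-side critical section */
    rcu_read_lock();
    list_for_each_entry_rcu(worker, &priv->workers, list)
        handle(worker);
    rcu_read_unlock();

    /* Update side: writers still serialize against each other */
    raw_spin_lock_irqsave(&priv->irq_spin, flags);
    list_del_rcu(&worker->list);
    raw_spin_unlock_irqrestore(&priv->irq_spin, flags);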
> Alternatively, could it use the spinlock to protect the list so that it doesn't
> need RCU at all?
Yes! That's what my next version will do.
> I'm not sure: however it seems this function may be in IRQ context too (as
> gpio_mpsse_irq_disable() does). GFP_KERNEL can sleep.
I worried about the same thing, but never followed up on it because I
never actually hit it. My bad. I will make this GFP_NOWAIT in the next
revision.
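Roughly (just a sketch of the allocation site):

    /*
     * GFP_NOWAIT never sleeps, so this is safe even when we are
     * reached from IRQ context; it can fail under pressure, though,
     * so the error path has to be handled.
     */
    worker = kzalloc(sizeof(*worker), GFP_NOWAIT);
    if (!worker)
        return; /* or propagate -ENOMEM, depending on the caller */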
> > + scoped_guard(raw_spinlock_irqsave, &priv->irq_spin)
> > + list_add_rcu(&worker->list, &priv->workers);
>
> Doesn't it need a synchronize_rcu()?
My understanding was that synchronize_rcu() waits out a grace period,
i.e. it blocks until all pre-existing readers have finished, so it
matters when you must be sure nobody can still be looking at the old
data (the classic remove-then-free case). On the add side here we don't
care what concurrent readers see. Now that I'm thinking about it
though, maybe the IRQ loop should call synchronize_rcu() first?
In any case, this will be going away in my next version.
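For reference, the remove-then-free shape where synchronize_rcu() is
load-bearing looks roughly like this (a sketch, not the driver code):

    unsigned long flags;

    raw_spin_lock_irqsave(&priv->irq_spin, flags);
    list_del_rcu(&worker->list);
    raw_spin_unlock_irqrestore(&priv->irq_spin, flags);

    /* Wait for any reader still traversing the unlinked entry */
    synchronize_rcu();
    kfree(worker);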
> > static void gpio_mpsse_disconnect(struct usb_interface *intf)
> > {
> > + struct mpsse_worker *worker;
> > struct mpsse_priv *priv = usb_get_intfdata(intf);
> > + struct list_head destructors = LIST_HEAD_INIT(destructors);
> > +
> > + /*
> > + * Lock prevents double-free of worker from here and the teardown
> > + * step at the beginning of gpio_mpsse_poll
> > + */
> > + scoped_guard(mutex, &priv->irq_race) {
> > + scoped_guard(rcu) {
> > + list_for_each_entry_rcu(worker, &priv->workers, list) {
> > + scoped_guard(raw_spinlock_irqsave, &priv->irq_spin)
> > + list_del_rcu(&worker->list);
> > +
> > + /* Give worker a chance to terminate itself */
> > + atomic_set(&worker->cancelled, 1);
> > + /* Keep track of stuff to cancel */
> > + INIT_LIST_HEAD(&worker->destroy);
> > + list_add(&worker->destroy, &destructors);
> > + }
> > + }
> > + /* Make sure list consumers are finished before we tear down */
> > + synchronize_rcu();
> > + list_for_each_entry(worker, &destructors, destroy)
> > + gpio_mpsse_stop(worker);
> > + }
>
> The code block is very similar to the block in gpio_mpsse_poll() above.
> Could consider using a function to avoid duplicating the code.
Yeah, I agree. I didn't see a satisfying way to do it given the
scoped_guard vs. scoped_cond_guard difference at the top. Now that I'm
thinking about it again, though, I could just take everything inside
the mutex guard and pull it out into a helper function.
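Something like this, maybe (gpio_mpsse_cancel_workers() is a name I'm
making up here, and I'm assuming gpio_mpsse_stop() frees the worker,
hence the _safe iteration):

    static void gpio_mpsse_cancel_workers(struct mpsse_priv *priv,
                                          struct mpsse_worker *my_worker)
    {
        struct mpsse_worker *worker, *tmp;
        LIST_HEAD(destructors);

        scoped_guard(rcu) {
            list_for_each_entry_rcu(worker, &priv->workers, list) {
                /* Don't stop ourselves (NULL in the disconnect path) */
                if (worker == my_worker)
                    continue;

                scoped_guard(raw_spinlock_irqsave, &priv->irq_spin)
                    list_del_rcu(&worker->list);

                /* Give worker a chance to terminate itself */
                atomic_set(&worker->cancelled, 1);
                /* Keep track of stuff to cancel */
                INIT_LIST_HEAD(&worker->destroy);
                list_add(&worker->destroy, &destructors);
            }
        }

        /* Make sure list consumers are finished before we tear down */
        synchronize_rcu();
        list_for_each_entry_safe(worker, tmp, &destructors, destroy)
            gpio_mpsse_stop(worker);
    }

gpio_mpsse_disconnect() would then take priv->irq_race with
scoped_guard, gpio_mpsse_poll() with scoped_cond_guard, and both just
call the helper inside the guard.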
Thanks a lot for taking a look! It's hard doing a critical reading of your own
code, especially for concurrency/memory safety things :)