Message-ID: <0c44265eae421eff49e19be3ebfe20d1fb5e6f9a.camel@sipsolutions.net>
Date: Wed, 10 May 2023 21:16:09 +0200
From: Johannes Berg <johannes@...solutions.net>
To: Tejun Heo <tj@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-wireless@...r.kernel.org,
Lai Jiangshan <jiangshanlai@...il.com>
Subject: Re: [RFC PATCH 2/4] workqueue: support holding a mutex for each work
On Wed, 2023-05-10 at 08:34 -1000, Tejun Heo wrote:
> On Wed, May 10, 2023 at 06:04:26PM +0200, Johannes Berg wrote:
> > @@ -2387,7 +2389,13 @@ __acquires(&pool->lock)
> >  	 */
> >  	lockdep_invariant_state(true);
> >  	trace_workqueue_execute_start(work);
> > -	worker->current_func(work);
> > +	if (unlikely(pwq->wq->work_mutex)) {
> > +		mutex_lock(pwq->wq->work_mutex);
> > +		worker->current_func(work);
> > +		mutex_unlock(pwq->wq->work_mutex);
> > +	} else {
> > +		worker->current_func(work);
> > +	}
>
> Ah, I don't know about this. This can't be that difficult to do from the
> callee side, right?
>
Yeah, I thought you'd say that :)

It isn't difficult; the issue is just that in the case I'm
envisioning, you can't simply call wiphy_lock(), since that would
attempt to pause the workqueue, which can't work from the workqueue
itself. So you'd need wiphy_lock_from_work()/wiphy_unlock_from_work(),
or remember to use the mutex directly there, all of which seemed more
error-prone and harder to maintain.
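Just to illustrate the shape of it (completely hypothetical sketch,
not from the series; assuming the pause patch ends up with a
workqueue_pause(), and a per-wiphy workqueue in a made-up
wiphy->workqueue field):

static inline void wiphy_lock(struct wiphy *wiphy)
{
	/* stop the workqueue so no work can run under the mutex */
	workqueue_pause(wiphy->workqueue);
	mutex_lock(&wiphy->mtx);
}

static inline void wiphy_lock_from_work(struct wiphy *wiphy)
{
	/*
	 * No workqueue_pause() here: we're running _on_ that
	 * workqueue, so pausing would deadlock; just take the
	 * mutex directly.
	 */
	mutex_lock(&wiphy->mtx);
}

And then every work function has to remember which variant to use,
which is exactly the error-prone part.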
But anyway, I could easily implement _both_ of these in cfg80211
directly, with just a linked list of works and a single struct
work_struct that executes the entries on the list, with the right
locking. That might be easier overall, just at the expense of more
churn while converting, but that's not even necessarily _bad_; it
would really guarantee that we can tell immediately that the work is
properly done...
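Something like this, roughly (untested sketch, all the struct/field
names here are invented):

struct wiphy_work {
	struct list_head entry;
	void (*func)(struct wiphy *wiphy, struct wiphy_work *work);
};

/* the single real work_struct just drains the list under the mutex */
static void cfg80211_wiphy_work(struct work_struct *work)
{
	struct cfg80211_registered_device *rdev =
		container_of(work, struct cfg80211_registered_device,
			     wiphy_work);
	struct wiphy_work *wk;

	mutex_lock(&rdev->wiphy.mtx);
	spin_lock_irq(&rdev->wiphy_work_lock);
	while (!list_empty(&rdev->wiphy_work_list)) {
		wk = list_first_entry(&rdev->wiphy_work_list,
				      struct wiphy_work, entry);
		list_del_init(&wk->entry);
		spin_unlock_irq(&rdev->wiphy_work_lock);

		/* every callback runs with the wiphy mutex held */
		wk->func(&rdev->wiphy, wk);

		spin_lock_irq(&rdev->wiphy_work_lock);
	}
	spin_unlock_irq(&rdev->wiphy_work_lock);
	mutex_unlock(&rdev->wiphy.mtx);
}

Then converted work functions run under the mutex by construction,
which is the "tell immediately" part.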
I'll play with that idea some, I guess. Would you still want the
pause/resume patch anyway, even if I end up not using it then?
johannes