[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <ZFvwdro1T4lnLDrs@slm.duckdns.org>
Date: Wed, 10 May 2023 09:28:54 -1000
From: Tejun Heo <tj@...nel.org>
To: Johannes Berg <johannes@...solutions.net>
Cc: linux-kernel@...r.kernel.org, linux-wireless@...r.kernel.org,
Lai Jiangshan <jiangshanlai@...il.com>
Subject: Re: [RFC PATCH 2/4] workqueue: support holding a mutex for each work
Hello,
On Wed, May 10, 2023 at 09:16:09PM +0200, Johannes Berg wrote:
> Yeah I thought you'd say that :)
Sorry about being so predictable. :)
> It isn't difficult, the issue is just that in the case I'm envisioning,
> you can't just call wiphy_lock() since that would attempt to pause the
> workqueue, which can't work from on the workqueue itself. So you need
> wiphy_lock_from_work()/wiphy_unlock_from_work() or remember to use the
> mutex directly there, which all seemed more error-prone and harder to
> maintain.
>
> But anyway I could easily implement _both_ of these in cfg80211
> directly, with just a linked list of works and a single struct
> work_struct to execute things on the list, with the right locking. That
> might be easier overall, just at the expense of more churn while
> converting, but that's not even necessarily _bad_, it would really
> guarantee that we can tell immediately the work is properly done...
>
> I'll play with that idea some, I guess. Would you still want the
> pause/resume patch anyway, even if I end up not using it then?
I think it's something inherently useful (along with the ability to do the
same thing to a work time - ie. cancel and inhibit a work item to be
queued0); however, it's probably not a good idea to merge without an in-tree
user. Would you mind posting a fixed patch nonetheless for future reference
if it's not too much hassle?
Thanks.
--
tejun
Powered by blists - more mailing lists