Message-ID: <20160624070515.GU30154@twins.programming.kicks-ass.net>
Date: Fri, 24 Jun 2016 09:05:15 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Tejun Heo <tj@...nel.org>
Cc: Petr Mladek <pmladek@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Josh Triplett <josh@...htriplett.org>,
Thomas Gleixner <tglx@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Jiri Kosina <jkosina@...e.cz>, Borislav Petkov <bp@...e.de>,
Michal Hocko <mhocko@...e.cz>, linux-mm@...ck.org,
Vlastimil Babka <vbabka@...e.cz>, linux-api@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v9 06/12] kthread: Add kthread_drain_worker()
On Thu, Jun 23, 2016 at 05:32:58PM -0400, Tejun Heo wrote:
> Hello,
>
> On Wed, Jun 22, 2016 at 10:54:45PM +0200, Peter Zijlstra wrote:
> > > + * The caller is responsible for blocking all users of this kthread
> > > + * worker from queuing new works. Also it is responsible for blocking
> > > + * the already queued works from an infinite re-queuing!
> >
> > This, I really dislike that. And it makes the kthread_destroy_worker()
> > from the next patch unnecessarily fragile.
> >
> > Why not add a kthread_worker::blocked flag somewhere and refuse/WARN
> > kthread_queue_work() when that is set.
>
> It's the same logic as the workqueue counterpart.
So? Clearly it (the kthread workqueue) can be improved here.
> For workqueue, nothing can make it less fragile as the workqueue
> struct itself is freed on destruction. If its users fail to stop
> issuing work items, it'll lead to use-after-free.
Right, but this kthread thingy does not, so why not add a failsafe?
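Something like the below, say; a completely untested sketch where the
"blocked" field and the simplified queueing path are my assumptions, not
what the patch actually does:

/* assumed new field in struct kthread_worker, protected by worker->lock */
	bool			blocked;	/* destruction has started */

bool kthread_queue_work(struct kthread_worker *worker,
			struct kthread_work *work)
{
	bool ret = false;
	unsigned long flags;

	spin_lock_irqsave(&worker->lock, flags);
	/* refuse and warn instead of silently racing with destruction */
	if (WARN_ON_ONCE(worker->blocked))
		goto out;
	if (list_empty(&work->node)) {
		list_add_tail(&work->node, &worker->work_list);
		wake_up_process(worker->task);
		ret = true;
	}
out:
	spin_unlock_irqrestore(&worker->lock, flags);
	return ret;
}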
> IIRC, the draining of self-requeueing work items is a specific
> requirement from some edge use case which used workqueue to implement
> multi-step state machine.
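That is, a work function that keeps putting itself back on the queue
until its state machine completes. A toy illustration only, the names
here are hypothetical and not from any real user:

static void step_work_fn(struct kthread_work *work)
{
	struct my_state *st = container_of(work, struct my_state, work);

	/* re-queue until the (hypothetical) state machine is done */
	if (!advance_state(st))
		kthread_queue_work(st->worker, work);
}

A plain flush never terminates for such a work item, which is where the
draining requirement comes from.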
Right, that might be an issue,
> Given how rare that is
Could you then not remove/rework these few cases for workqueue as well
and make that 'better' too?
> and the extra
> complexity of identifying self-requeueing cases, let's forget about
> draining and on destruction clear the worker pointer to block further
> queueing and then flush whatever is in flight.
You're talking about regular workqueues here?
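On the kthread worker side that approach could look roughly like the
sketch below, reusing the assumed "blocked" flag from above, so again an
illustration rather than the actual patch:

void kthread_destroy_worker(struct kthread_worker *worker)
{
	struct task_struct *task = worker->task;
	unsigned long flags;

	if (WARN_ON(!task))
		return;

	/* block any further queueing */
	spin_lock_irqsave(&worker->lock, flags);
	worker->blocked = true;
	spin_unlock_irqrestore(&worker->lock, flags);

	/* wait for everything already queued or in flight */
	kthread_flush_worker(worker);

	kthread_stop(task);
	WARN_ON(!list_empty(&worker->work_list));
}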