Message-ID: <1363624883.25967.184.camel@gandalf.local.home>
Date: Mon, 18 Mar 2013 12:41:23 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Tejun Heo <tj@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
RT <linux-rt-users@...r.kernel.org>,
Clark Williams <clark@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: workqueue code needing preemption disabled
On Mon, 2013-03-18 at 09:27 -0700, Tejun Heo wrote:
> Does that mean that a task holding gcwq->lock may be preempted? If
> so, that sure could lead to weird problems. Maybe gcwq->lock should
> be marked non-preemptible somehow?
If the gcwq->lock is never held for a long time (really, more than a
microsecond on today's processors is considered a long time), and no
other spin_locks are taken while it is held (raw locks are OK, like the
rq lock), then we could mark the gcwq->lock as raw as well.
This would require struct global_cwq to declare its lock as:

	raw_spinlock_t lock;

and then you would need to do s/spin_/raw_spin_/ on all gcwq->lock
usages.
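
Roughly, the conversion would look like this (a sketch only, not a
tested patch; the helper function below is made up for illustration):

	#include <linux/spinlock.h>

	struct global_cwq {
		raw_spinlock_t		lock;	/* was: spinlock_t lock; */
		/* ... busy-worker tracking and other fields unchanged ... */
	};

	/* every locking site then moves to the raw_ variants, e.g.: */
	static void touch_gcwq(struct global_cwq *gcwq)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(&gcwq->lock, flags);   /* was spin_lock_irqsave() */
		/* ... short, bounded critical section only ... */
		raw_spin_unlock_irqrestore(&gcwq->lock, flags);
	}
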
But I'm worried about the loops that are done while holding this lock.
Just looking at is_chained_work(), which does for_each_busy_worker():
how big can that list be? If it's bounded by the # of CPUs then that may
be fine, but if it can be as big as the # of workers assigned, with no
real limit, then it's not fine, because that creates an unbounded
(non-deterministic) latency.
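
To make the concern concrete, here is a rough sketch of the kind of
loop I mean (this is NOT the real is_chained_work() code; the list and
field names below are made up to stand in for the busy-worker hash):

	#include <linux/list.h>
	#include <linux/spinlock.h>
	#include <linux/sched.h>

	/* hypothetical illustration only */
	static bool current_is_busy_worker(struct global_cwq *gcwq)
	{
		struct worker *worker;
		bool found = false;

		raw_spin_lock_irq(&gcwq->lock);
		/*
		 * If the number of busy workers scales with the number of
		 * queued work items rather than with the number of CPUs,
		 * this loop has no fixed upper bound.  With a raw lock,
		 * preemption stays off for its whole duration, which is an
		 * unbounded latency source on RT.
		 */
		list_for_each_entry(worker, &gcwq->busy_list, entry) {
			if (worker->task == current) {
				found = true;
				break;
			}
		}
		raw_spin_unlock_irq(&gcwq->lock);

		return found;
	}
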
-- Steve