Message-ID: <20131128140546.GA3925@htj.dyndns.org>
Date: Thu, 28 Nov 2013 09:05:46 -0500
From: Tejun Heo <htejun@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Oleg Nesterov <oleg@...hat.com>, zhang.yi20@....com.cn,
lkml <linux-kernel@...r.kernel.org>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH]: exec: avoid propagating PF_NO_SETAFFINITY into
userspace child
Hello, Peter.
On Thu, Nov 28, 2013 at 02:41:24PM +0100, Peter Zijlstra wrote:
> On Thu, Nov 28, 2013 at 02:31:52PM +0100, Oleg Nesterov wrote:
> > I _guess_ usermodehelper_init() should use WQ_SYSFS then, and in this case
> > the user can write to wq_cpumask_store somewhere in /sys/.
>
> WTF is that and why are we creating alternative affinity interfaces when
> sched_setaffinity() is a perfectly fine one?
Hmmm? Because workqueue is a shared worker pool implementation. The
attributes are per-workqueue, and if two workqueues have the same set
of attributes, they share the same execution resources. There's no
one-to-one relationship between a worker thread and a workqueue
(otherwise we'd end up with a ton of tasks sitting around doing
nothing), so you can't do sched_setaffinity() on an individual worker
task and expect it to work.
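To make the WQ_SYSFS point concrete, here's a rough sketch (not a
tested patch; the khelper_wq / "khelper" names are just what I'd
expect in kernel/kmod.c): allocating the usermodehelper workqueue as
an unbound WQ_SYSFS workqueue would expose its cpumask in sysfs.

  #include <linux/init.h>
  #include <linux/bug.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *khelper_wq;

  void __init usermodehelper_init(void)
  {
          /*
           * WQ_UNBOUND + WQ_SYSFS: the workqueue's attributes,
           * including its cpumask, show up under
           * /sys/devices/virtual/workqueue/khelper/.
           */
          khelper_wq = alloc_workqueue("khelper",
                                       WQ_UNBOUND | WQ_SYSFS, 0);
          BUG_ON(!khelper_wq);
  }

Affinity would then be adjusted per-workqueue rather than per-task,
e.g. something like

  echo 0-3 > /sys/devices/virtual/workqueue/khelper/cpumask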
The singlethread name just exists for compatibility; the new interface
is alloc_ordered_workqueue(), which creates a workqueue that executes
work items one by one in issue order. For singlethread / ordered
workqueues with a rescuer, it could work to always execute work items
on the rescuer, as we're reserving an execution resource anyway, but
that would unnecessarily increase cache footprint by actively using
more tasks than necessary and also increase code complexity.
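For reference, the interface looks roughly like this (minimal sketch
with made-up names, assuming WQ_MEM_RECLAIM is the flag that attaches
the rescuer):

  #include <linux/module.h>
  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static void my_work_fn(struct work_struct *work)
  {
          /*
           * Work items on an ordered workqueue run strictly one at a
           * time, in the order they were queued.
           */
  }
  static DECLARE_WORK(my_work, my_work_fn);

  static struct workqueue_struct *my_wq;

  static int __init my_init(void)
  {
          /* ordered (max_active == 1) + rescuer via WQ_MEM_RECLAIM */
          my_wq = alloc_ordered_workqueue("my_ordered", WQ_MEM_RECLAIM);
          if (!my_wq)
                  return -ENOMEM;

          queue_work(my_wq, &my_work);
          return 0;
  }
  module_init(my_init);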
Thanks.
--
tejun