Message-ID: <20131205162658.GB20828@localhost.localdomain>
Date: Thu, 5 Dec 2013 17:26:59 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: Tejun Heo <htejun@...il.com>
Cc: Oleg Nesterov <oleg@...hat.com>, Christoph Lameter <cl@...ux.com>,
Peter Zijlstra <peterz@...radead.org>, zhang.yi20@....com.cn,
lkml <linux-kernel@...r.kernel.org>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH 1/1] usermodehelper: kill
____call_usermodehelper()->set_cpus_allowed_ptr()
On Thu, Dec 05, 2013 at 09:39:03AM -0500, Tejun Heo wrote:
> Hello,
>
> On Thu, Dec 05, 2013 at 03:37:45PM +0100, Oleg Nesterov wrote:
> > > I'm adding Christoph in Cc because he is interested in tweaking the
> > > affinity of call_usermodehelper for cpu isolation. This welcome
> > > cleanup confirms that we want to take the direction of being able to
> > > change the affinity of workqueue themselves instead of just
> > > call_usermodehelper() alone.
> >
> > OK, but I'd like to remind just in case, as Tejun pointed out this
> > patch is wrong ;)
> >
> > And "change the affinity of workqueue themselves" is not simple, but
> > we can make khelper_wq WQ_SYSFS.
>
> Maybe workqueue should implement and expose default attributes which
> are inherited by all workqueues unless they're explicitly overridden?
> The use case here is not really about isolating certain subgroup of
> workers but rather being able to control the default behavior, right?
So the need I know of is to perform CPU isolation on a given set of CPUs.
There are two known use cases:
1) Optimize the use of the CPU and avoid being disturbed by interrupts, other
tasks, etc... These disturbances trash the cache and steal CPU time. We want to
maximize the CPU's availability to the workload. So that's the HPC case.
2) Guarantee that the CPU won't be disturbed while some high priority task is running.
So we have a critical task that must complete within a given deadline or under
some bounded latency. That's the real time case.
Case 1) can tolerate occasional rare noise, but case 2) can't.
In both cases I think that people are happy with having one CPU that handles all
the housekeeping: timekeeping, unbound timers, workqueues, RCU callbacks
and random stats maintenance. But this housekeeping could possibly be split in
the future across per NUMA node affine housekeepers, who knows? So I think it may be nice to
have some flexibility in how we affine these workqueues, hence a user-driven
action taken through existing interfaces like sysfs sounds like a good
candidate.
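To sketch what that sysfs-driven affinity could look like: unbound workqueues
created with WQ_SYSFS already expose per-workqueue attributes under
/sys/devices/virtual/workqueue/<name>/. The snippet below is purely
hypothetical for khelper, since khelper_wq is not created with WQ_SYSFS in
mainline as of this thread; it only illustrates the knob an admin would get
if it were.

```shell
# Hypothetical: assumes khelper_wq were allocated with WQ_UNBOUND | WQ_SYSFS,
# e.g. alloc_workqueue("khelper", WQ_UNBOUND | WQ_SYSFS, 0), which is NOT the
# case in mainline at the time of this thread.
#
# WQ_SYSFS unbound workqueues expose a per-workqueue cpumask; writing a hex
# mask confines their workers to the given CPUs, here CPU 0 as the
# housekeeping CPU.
echo 1 > /sys/devices/virtual/workqueue/khelper/cpumask
```

With such a knob, the per-NUMA-node split mentioned above would just be a
matter of userspace writing different masks, rather than new kernel policy.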
> Thanks.
>
> --
> tejun
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/