Message-ID: <20200324150707.GB24352@fuller.cnet>
Date: Tue, 24 Mar 2020 12:07:07 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Chris Friesen <chris.friesen@...driver.com>
Cc: linux-kernel@...r.kernel.org, Christoph Lameter <cl@...ux.com>,
Vu Tran <vu.tran@...driver.com>,
Jim Somerville <Jim.Somerville@...driver.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] affine kernel threads to specified cpumask
Hi Chris,
On Mon, Mar 23, 2020 at 09:29:23AM -0600, Chris Friesen wrote:
> On 3/23/2020 7:54 AM, Marcelo Tosatti wrote:
> >
> > This is a kernel enhancement to configure the cpu affinity of kernel
> > threads via kernel boot option kthread_cpus=<cpulist>.
> >
> > With kthread_cpus specified, the cpumask is immediately applied upon
> > thread launch. This does not affect kernel threads that specify cpu
> > and node.
> >
> > This allows CPU isolation (that is not allowing certain threads
> > to execute on certain CPUs) without using the isolcpus= parameter,
> > making it possible to enable load balancing on such CPUs
> > during runtime.
> >
> > Note-1: this is based off on MontaVista's patch at
> > https://github.com/starlingx-staging/stx-integ/blob/master/kernel/kernel-std/centos/patches/affine-compute-kernel-threads.patch
>
> It's Wind River, not MontaVista. :)
Doh.
> > Difference being that this patch is limited to modifying
> > kernel thread cpumask: Behaviour of other threads can
> > be controlled via cgroups or sched_setaffinity.
>
> What cgroup would the usermode helpers called by the kernel end up in?
> Same as init?
>
> Assuming that's covered, I'm good with this patch.
>
> <snip>
The helpers are queued onto the unbound system workqueue; from
call_usermodehelper_exec() in kernel/umh.c:

 * Runs a user-space application. The application is started
 * asynchronously if wait is not set, and runs as a child of system workqueues.
 * (ie. it runs with full root capabilities and optimized affinity).
 */
int call_usermodehelper_exec(struct subprocess_info *sub_info, int wait)
{
...
        queue_work(system_unbound_wq, &sub_info->work);
And the cpumask of unbound workqueue workers is controllable:
static void worker_attach_to_pool(struct worker *worker,
                                  struct worker_pool *pool)
{
        mutex_lock(&wq_pool_attach_mutex);

        /*
         * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any
         * online CPUs. It'll be re-applied when any of the CPUs come up.
         */
        set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
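
So the helpers start out as children of the unbound workers, and the
workers' affinity can already be narrowed from userspace through the
workqueue sysfs knob (it sets wq_unbound_cpumask; IIRC the attribute
takes a hex mask rather than a cpulist):

        # echo 0f > /sys/devices/virtual/workqueue/cpumask

which would confine all unbound workers (and, barring an explicit
re-affine in the helper path, whatever they fork) to CPUs 0-3.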
>
> > +static struct cpumask user_cpu_kthread_mask __read_mostly;
> > +static int user_cpu_kthread_mask_valid __read_mostly;
>
> Would it be cleaner to get rid of user_cpu_kthread_mask_valid and just
> move the "if (!cpumask_empty" check into init_kthread_cpumask()? I'm
> not really opinionated, just thinking out loud.
Will get rid of this with Thomas's isolcpus= suggestion.
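To sketch that direction (HK_FLAG_KTHREAD below is just a placeholder
name for whatever housekeeping flag we end up with): the mask would come
from the isolcpus=/housekeeping machinery instead of a new parameter, and
the per-thread side reduces to something like this at kernel thread
startup:

        /*
         * Sketch only: confine freshly created kernel threads to the
         * housekeeping CPUs derived from isolcpus=, instead of reading
         * a separate kthread_cpus= mask.
         */
        set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_KTHREAD));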
> > +int __init init_kthread_cpumask(void)
> > +{
> > +        if (user_cpu_kthread_mask_valid == 1)
> > +                cpumask_copy(&__cpu_kthread_mask, &user_cpu_kthread_mask);
> > +        else
> > +                cpumask_copy(&__cpu_kthread_mask, cpu_all_mask);
> > +
> > +        return 0;
> > +}
> > +
> > +static int __init kthread_setup(char *str)
> > +{
> > +        cpulist_parse(str, &user_cpu_kthread_mask);
> > +        if (!cpumask_empty(&user_cpu_kthread_mask))
> > +                user_cpu_kthread_mask_valid = 1;
> > +
> > +        return 1;
> > +}
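
On the application side, "applied upon thread launch" boils down to a
set_cpus_allowed_ptr() on the new thread, roughly (cpu_kthread_mask
standing in for a const view of __cpu_kthread_mask above; exact hook
location in the kthread creation path aside):

        /*
         * At kernel thread launch, when no explicit cpu/node binding was
         * requested: confine the new thread to the configured mask.  With
         * no kthread_cpus= this is cpu_all_mask, so behaviour is unchanged.
         */
        set_cpus_allowed_ptr(current, cpu_kthread_mask);

So booting with e.g. kthread_cpus=0-3 keeps unpinned kernel threads on
CPUs 0-3, while CPUs 4 and up stay in the scheduler's load balancing,
which is the difference from plain isolcpus=.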