Message-ID: <YbedxU6tBfEiOWkC@slm.duckdns.org>
Date: Mon, 13 Dec 2021 09:23:49 -1000
From: Tejun Heo <tj@...nel.org>
To: Honglei Wang <wanghonglei@...ichuxing.com>
Cc: Zefan Li <lizefan.x@...edance.com>,
Johannes Weiner <hannes@...xchg.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
jameshongleiwang@....com
Subject: Re: [RESEND PATCH RFC] cgroup: support numabalancing disable in
cgroup level
Hello,
On Mon, Dec 13, 2021 at 11:05:06PM +0800, Honglei Wang wrote:
> +#ifdef CONFIG_NUMA_BALANCING
> +static void __cgroup_numabalancing_disable_set(struct cgroup *cgrp, bool nb_disable)
> +{
> + struct css_task_iter it;
> + struct task_struct *task;
> +
> + lockdep_assert_held(&cgroup_mutex);
> +
> + spin_lock_irq(&css_set_lock);
> + if (nb_disable)
> + set_bit(CGRP_NUMABALANCING_DISABLE, &cgrp->flags);
> + else
> + clear_bit(CGRP_NUMABALANCING_DISABLE, &cgrp->flags);
> + spin_unlock_irq(&css_set_lock);
> +
> + css_task_iter_start(&cgrp->self, 0, &it);
> + while ((task = css_task_iter_next(&it))) {
> + /*
> + * We don't care about NUMA placement if the task is exiting.
> + * And we don't NUMA balance for kthreads.
> + */
> + if (task->flags & (PF_EXITING | PF_KTHREAD))
> + continue;
> + task->numa_cgrp_disable = nb_disable;
> + }
> + css_task_iter_end(&it);
> +}
All it's doing is setting some property recursively, and I don't think it
makes sense to keep expanding the cgroup interface for this sort of usage.
It's not distributing any resource in a hierarchical way, and the whole
feature can be replaced by an inheritable per-process interface with some
scripting. Unless there are some other compelling reasons, this is gonna be
a strong nack from the cgroup side.
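(For illustration only: a minimal sketch of the scripting alternative
described above, assuming a hypothetical per-process knob exposed at a
procfs-style path. No such file exists in mainline; the `knob_path_fmt`
parameter and the 0/1 encoding are assumptions, not an existing interface.)

```python
import os

def set_numab_for_cgroup(cgroup_dir, disable,
                         knob_path_fmt="/proc/{pid}/numa_balancing"):
    """Apply a per-process NUMA-balancing setting to every task in a cgroup.

    cgroup_dir:    cgroup v2 directory containing a cgroup.procs file
    knob_path_fmt: hypothetical per-process interface (assumption; not a
                   real mainline file) -- "0" disables, "1" enables
    Returns the list of PIDs that were successfully updated.
    """
    procs = os.path.join(cgroup_dir, "cgroup.procs")
    updated = []
    with open(procs) as f:
        for line in f:
            pid = line.strip()
            if not pid:
                continue
            knob = knob_path_fmt.format(pid=pid)
            try:
                with open(knob, "w") as k:
                    k.write("0" if disable else "1")
                updated.append(int(pid))
            except OSError:
                # Task may have exited between reading cgroup.procs and
                # writing the knob, or may not expose the setting; skip it.
                continue
    return updated
```

Inheritance across fork() would come for free from the per-process setting
itself, so the script only needs to run once per cgroup rather than track
new tasks.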
Thanks.
--
tejun