Message-ID: <20200211085443.2a112c03@gandalf.local.home>
Date: Tue, 11 Feb 2020 08:54:43 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: 王贇 <yun.wang@...ux.alibaba.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
"open list:SCHEDULER" <linux-kernel@...r.kernel.org>,
Tejun Heo <tj@...nel.org>, Li Zefan <lizefan@...wei.com>,
Johannes Weiner <hannes@...xchg.org>, cgroups@...r.kernel.org
Subject: Re: [RFC] why can't dynamic isolation just like the static way
You forgot to include the cgroup maintainers.
-- Steve
On Tue, 11 Feb 2020 16:17:34 +0800
王贇 <yun.wang@...ux.alibaba.com> wrote:
> Hi, folks
>
> We have been working with isolcpus recently, trying to do the
> isolation dynamically.
>
> The kernel documentation led us to cpuset.sched_load_balance; dynamic
> isolation can be achieved with it, but we ran into a problem with
> systemd.
>
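For reference, the cpuset recipe the kernel docs describe boils down to
something like the minimal userspace sketch below. The mount point
/sys/fs/cgroup/cpuset, the "balanced" cpuset name, and the choice of
cpus 2-3 as the ones to isolate are all assumptions for illustration,
not anything taken from the original setup:

/*
 * Minimal sketch: disable load balancing at the root cpuset, then
 * confine balancing to a child cpuset holding only the non-isolated
 * cpus.  Cpus left outside every balanced cpuset (here 2-3) drop out
 * of the sched domains, i.e. become dynamically isolated.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_str(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0 || write(fd, val, strlen(val)) < 0)
                perror(path);
        if (fd >= 0)
                close(fd);
}

int main(void)
{
        /* Stop balancing across the root cpuset as a whole. */
        write_str("/sys/fs/cgroup/cpuset/cpuset.sched_load_balance", "0");

        /* Keep cpus 0-1 balanced inside a child cpuset. */
        mkdir("/sys/fs/cgroup/cpuset/balanced", 0755);
        write_str("/sys/fs/cgroup/cpuset/balanced/cpuset.cpus", "0-1");
        write_str("/sys/fs/cgroup/cpuset/balanced/cpuset.mems", "0");
        write_str("/sys/fs/cgroup/cpuset/balanced/cpuset.sched_load_balance", "1");

        return 0;
}

Tasks then have to be moved into the balanced cpuset by hand, and any
cgroup created later that spans cpus 2-3 with balancing enabled quietly
undoes the isolation, which is exactly the systemd problem described
next.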
> systemd keeps creating cgroups with sched_load_balance enabled by
> default, and their cpus overlap the isolated ones, which triggers a
> sched domain rebuild and makes those cpus non-isolated again.
>
> We are just looking for an easy way to dynamically isolate some cpus,
> just like the isolcpus boot parameter, but sched_load_balance forces
> us to deal with cgroup management, and we really don't see the point
> of that...
>
> Why do we have to mix isolation with cgroups? Why not just provide a
> proc entry that takes a cpumask and rebuilds the sched domains?
>
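Something like the following purely hypothetical sketch, perhaps. The
/proc/dyn_isolcpus name is invented, and it glosses over how the parsed
mask would actually be excluded from the rebuilt domains; the only
pieces borrowed from the existing kernel are cpumask_parselist_user()
and rebuild_sched_domains():

#include <linux/cpumask.h>
#include <linux/cpuset.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/proc_fs.h>

/* Hypothetical mask of dynamically isolated cpus. */
static struct cpumask dyn_isolated;

static ssize_t dyn_isolcpus_write(struct file *file, const char __user *ubuf,
                                  size_t count, loff_t *ppos)
{
        int err;

        /* Accept a cpulist such as "2-5,8" written from user space. */
        err = cpumask_parselist_user(ubuf, count, &dyn_isolated);
        if (err)
                return err;

        /*
         * A real implementation would exclude dyn_isolated from the
         * domain spans here, the way isolcpus= is excluded at boot.
         */
        rebuild_sched_domains();
        return count;
}

static const struct file_operations dyn_isolcpus_fops = {
        .write = dyn_isolcpus_write,
};

static int __init dyn_isolcpus_init(void)
{
        proc_create("dyn_isolcpus", 0200, NULL, &dyn_isolcpus_fops);
        return 0;
}
late_initcall(dyn_isolcpus_init);

Usage would then be a single write, e.g. echo 2-5 > /proc/dyn_isolcpus,
with no cgroup hierarchy involved.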
> Please let us know if there is a good reason for doing dynamic
> isolation that way; appreciated in advance :-)
>
> Regards,
> Michael Wang