Message-ID: <20160916161951.GH5016@twins.programming.kicks-ass.net>
Date: Fri, 16 Sep 2016 18:19:51 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Andy Lutomirski <luto@...capital.net>
Cc: Ingo Molnar <mingo@...hat.com>,
Mike Galbraith <umgwanakikbuti@...il.com>, kernel-team@...com,
Andrew Morton <akpm@...ux-foundation.org>,
"open list:CONTROL GROUP (CGROUP)" <cgroups@...r.kernel.org>,
Paul Turner <pjt@...gle.com>, Li Zefan <lizefan@...wei.com>,
Linux API <linux-api@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Tejun Heo <tj@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [Documentation] State of CPU controller in cgroup v2
On Fri, Sep 16, 2016 at 08:12:58AM -0700, Andy Lutomirski wrote:
> On Sep 16, 2016 12:51 AM, "Peter Zijlstra" <peterz@...radead.org> wrote:
> >
> > On Thu, Sep 15, 2016 at 01:08:07PM -0700, Andy Lutomirski wrote:
> > > BTW, Mike keeps mentioning exclusive cgroups as problematic with the
> > > no-internal-tasks constraints. Do exclusive cgroups still exist in
> > > cgroup2? Could we perhaps just remove that capability entirely? I've
> > > never understood what problem exclusive cpusets and such solve that
> > > can't be more comprehensibly solved by just assigning the cpusets the
> > > normal inclusive way.
> >
> > Without exclusive sets we cannot split the sched_domain structure.
> > Which leads to not being able to actually partition things. That would
> > break DL for one.
>
> Can you sketch out a toy example?
[ Also see Documentation/cgroup-v1/cpusets.txt section 1.7 ]
mkdir /cpuset
mount -t cgroup -o cpuset none /cpuset
mkdir /cpuset/A
mkdir /cpuset/B
cat /sys/devices/system/node/node0/cpulist > /cpuset/A/cpuset.cpus
echo 0 > /cpuset/A/cpuset.mems
cat /sys/devices/system/node/node1/cpulist > /cpuset/B/cpuset.cpus
echo 1 > /cpuset/B/cpuset.mems
# move all movable tasks into A
cat /cpuset/tasks | while read task; do echo $task > /cpuset/A/tasks ; done
# kill machine wide load-balancing
echo 0 > /cpuset/cpuset.sched_load_balance
# now place 'special' tasks in B
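# (a sketch; $SPECIAL_PID is a stand-in for your real task's PID)
echo $SPECIAL_PID > /cpuset/B/tasks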
This partitions the scheduler in two, one partition per node.
From then on no task will be moved from one node to the other: the
load-balancer is split in two as well, one instance balancing within A
and one within B, and nothing crosses. (It is important that A.cpus
and B.cpus do not intersect.)
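You can verify the split with something like the below (a sketch; it
assumes a kernel built with CONFIG_SCHED_DEBUG, which exposes the
domain tree under /proc):
# after the split, the top-level domain of each CPU should span only
# its own node, not the whole machine
cat /proc/sys/kernel/sched_domain/cpu0/domain*/name
grep . /proc/sys/kernel/sched_domain/cpu0/domain*/flags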
Ideally no task would remain in the root group. Back in the day we
could actually achieve this (with the exception of the CPU-bound
kernel threads), but this has significantly regressed :-(
(still hate the workqueue affinity interface)
As is, tasks that are left in the root group get balanced within
whatever domain they ended up in.
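Something like this shows where they landed (PSR being the CPU each
task last ran on):
ps -eo pid,psr,comm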
> And what's DL?
SCHED_DEADLINE; it's a 'Global'-EDF-like scheduler that doesn't
support CPU affinities (because those make no sense for it). The only
way to restrict it is to partition.
'Global' because you can partition it. If you reduce your system to
single-CPU partitions you'll reduce it to P-EDF.
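For example (a sketch; it assumes a util-linux recent enough that
chrt(1) understands deadline, and 'my_rt_app' is a made-up binary;
times are in ns):
# 10ms runtime, 30ms deadline, every 100ms; priority must be 0 for DL
chrt --deadline --sched-runtime 10000000 --sched-deadline 30000000 \
     --sched-period 100000000 0 ./my_rt_app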
(The same is true of SCHED_FIFO: that's a 'Global'-FIFO on the same
partition scheme. It does, however, support sched_setaffinity(), but
using that gives 'interesting' schedulability results -- call it a
historic accident.)
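E.g. (again a sketch, 'my_rt_app' being made up):
# pin a FIFO-50 task to CPU 2; this punches a hole in the 'global'
# FIFO scheduling of whatever partition CPU 2 belongs to
taskset -c 2 chrt -f 50 ./my_rt_app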
Note that, relatedly but differently, we have the isolcpus boot
parameter, which creates single-CPU partitions for all listed CPUs and
gives the rest to the root cpuset. Ideally we'd kill this option given
that it's a boot-time setting for something which is trivial to do at
runtime.
But this cannot be done, because that would mean we'd have to start with
a !0 cpuset layout:
                     '/'
               load_balance=0
                /          \
         'system'          'isolated'
      cpus=~isolcpus      cpus=isolcpus
                          load_balance=0
And start with _everything_ in the /system group (including default
IRQ affinities).
Of course, that will break everything cgroup :-(
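(For reference, the runtime equivalent of that layout with cpuset; a
sketch assuming a single-node machine where CPUs 1-3 would have been
the isolcpus:)
mkdir /cpuset/system /cpuset/isolated
echo 0   > /cpuset/system/cpuset.cpus       # ~isolcpus
echo 0   > /cpuset/system/cpuset.mems
echo 1-3 > /cpuset/isolated/cpuset.cpus     # isolcpus
echo 0   > /cpuset/isolated/cpuset.mems
echo 0   > /cpuset/isolated/cpuset.sched_load_balance
echo 0   > /cpuset/cpuset.sched_load_balance
# and move everything into 'system'
cat /cpuset/tasks | while read task; do echo $task > /cpuset/system/tasks; done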