Message-ID: <f7cad09a-09e3-e150-d505-ac75aece0248@arm.com>
Date: Tue, 19 Jan 2021 16:37:46 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Daniel Bristot de Oliveira <bristot@...hat.com>,
linux-kernel@...r.kernel.org
Cc: Marco Perronet <perronet@...-sws.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Li Zefan <lizefan@...wei.com>, Tejun Heo <tj@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Valentin Schneider <valentin.schneider@....com>,
cgroups@...r.kernel.org
Subject: Re: [PATCH 4/6] sched/deadline: Block DL tasks on non-exclusive
cpuset if bandwidth control is enabled
On 19/01/2021 10:41, Daniel Bristot de Oliveira wrote:
> On 1/14/21 4:51 PM, Dietmar Eggemann wrote:
>> On 12/01/2021 16:53, Daniel Bristot de Oliveira wrote:
[...]
>> with this patch:
>>
>> cgroupv1:
>>
>> root@...o:/sys/fs/cgroup/cpuset# chrt -d --sched-period 1000000000
>> --sched-runtime 100000000 0 sleep 500 &
>> [1] 1668
>> root@...o:/sys/fs/cgroup/cpuset# PID1=$!
>>
>> root@...o:/sys/fs/cgroup/cpuset# chrt -d --sched-period 1000000000
>> --sched-runtime 100000000 0 sleep 500 &
>> [2] 1669
>> root@...o:/sys/fs/cgroup/cpuset# PID2=$!
>>
>> root@...o:/sys/fs/cgroup/cpuset# mkdir A
>>
>> root@...o:/sys/fs/cgroup/cpuset# echo 0 > ./A/cpuset.mems
>> root@...o:/sys/fs/cgroup/cpuset# echo 0 > ./A/cpuset.cpus
>>
>> root@...o:/sys/fs/cgroup/cpuset# echo $PID2 > ./A/cgroup.procs
>> -bash: echo: write error: Device or resource busy
>>
>> root@...o:/sys/fs/cgroup/cpuset# echo 1 > ./A/cpuset.cpu_exclusive
>>
>> root@...o:/sys/fs/cgroup/cpuset# echo $PID2 > ./A/cgroup.procs
>>
>> root@...o:/sys/fs/cgroup/cpuset# cat /proc/$PID1/status | grep
>> Cpus_allowed_list | awk '{print $2}'
>> 0-5
>> root@...o:/sys/fs/cgroup/cpuset# cat /proc/$PID2/status | grep
>> Cpus_allowed_list | awk '{print $2}'
>> 0
>
> On cgroup v1 we also need to disable load balancing to create a root domain, right?
IMHO, that's not necessary for this example. But yes, if we create two
exclusive cpusets A and B, we want to turn off load balancing at the
root level. Doing it in this example doesn't hurt either, but then we
end up with no sched domain, since load balancing is disabled at the
root and A only contains CPU0.
root@...o:/sys/fs/cgroup/cpuset# echo 0 > cpuset.sched_load_balance
ls /proc/sys/kernel/sched_domain/cpu*/ doesn't show any (sched) domains.
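
FWIW, the full setup with two exclusive cpusets would look something
like this (run from /sys/fs/cgroup/cpuset; the A/B names and the 0 vs.
1-5 CPU split are just an example for the 6 CPU system above):

  mkdir A B
  echo 0 > A/cpuset.mems; echo 0 > A/cpuset.cpus
  echo 0 > B/cpuset.mems; echo 1-5 > B/cpuset.cpus
  echo 1 > A/cpuset.cpu_exclusive
  echo 1 > B/cpuset.cpu_exclusive
  echo 0 > cpuset.sched_load_balance

With that, A and B each get their own root domain; CPUs 1-5 still get a
sched domain, while CPU0 (alone in A) doesn't.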
>> cgroupv2:
>
> Yeah, I see your point. I was seeing a different output because of
> Fedora's default behavior of adding tasks to the
> system.slice/user.slice...
>
> doing:
>
>> root@...o:/sys/fs/cgroup# echo +cpuset > cgroup.subtree_control
>
> # echo $$ > cgroup.procs
The current shell should already be in the root cgroup?
root@...o:/sys/fs/cgroup# echo $$
1644
root@...o:/sys/fs/cgroup# cat cgroup.procs | grep $$
1644
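
FWIW, the cgroup v2 counterpart of an exclusive cpuset would be a
partition root, so the v1 example above should translate to something
like this (run from /sys/fs/cgroup; the directory name A is just an
example):

  echo +cpuset > cgroup.subtree_control
  mkdir A
  echo 0 > A/cpuset.cpus
  echo root > A/cpuset.cpus.partition
  echo $PID2 > A/cgroup.procs

cpuset.cpus has to be set before the "root" write can succeed; reading
cpuset.cpus.partition back shows whether the partition is valid.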
[...]