Message-ID: <1679cb16-a4a1-4a5f-9742-3523555d33f9@bursov.com>
Date: Thu, 28 Mar 2024 18:27:20 +0200
From: Vitalii Bursov <vitaly@...sov.com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>, Daniel Bristot de Oliveira
<bristot@...hat.com>, Valentin Schneider <vschneid@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] sched/fair: allow disabling newidle_balance with
sched_relax_domain_level
On 28.03.24 16:43, Vincent Guittot wrote:
> On Thu, 28 Mar 2024 at 01:31, Vitalii Bursov <vitaly@...sov.com> wrote:
>>
>> Change relax_domain_level checks so that it would be possible
>> to exclude all domains from newidle balancing.
>>
>> This matches the behavior described in the documentation:
>> -1 no request. use system default or follow request of others.
>> 0 no search.
>> 1 search siblings (hyperthreads in a core).
>>
>> "2" enables levels 0 and 1, level_max excludes the last (level_max)
>> level, and level_max+1 includes all levels.
>
> I was about to say that max+1 is useless because it's the same as -1,
> but it's not exactly the same: it can supersede the system-wide
> default_relax_domain_level. I wonder whether one should be able to
> enable more levels than what the system has set by default.
I don't know if such systems exist, but cpusets.rst suggests that
increasing it beyond the default value is possible (see also the
sketch after the quoted text below):
> If your situation is:
>
> - The migration costs between each cpu can be assumed considerably
> small(for you) due to your special application's behavior or
> special hardware support for CPU cache etc.
> - The searching cost doesn't have impact(for you) or you can make
> the searching cost enough small by managing cpuset to compact etc.
> - The latency is required even it sacrifices cache hit rate etc.
> then increasing 'sched_relax_domain_level' would benefit you.
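
For context, with this patch applied the relevant logic in
set_domain_attribute() looks roughly like this (a paraphrase, not the
exact source):

    static void set_domain_attribute(struct sched_domain *sd,
                                     struct sched_domain_attr *attr)
    {
            int request;

            if (!attr || attr->relax_domain_level < 0) {
                    /* No per-cpuset request: fall back to the system-wide default. */
                    if (default_relax_domain_level < 0)
                            return;
                    request = default_relax_domain_level;
            } else {
                    /* A per-cpuset value overrides the default, even a larger one. */
                    request = attr->relax_domain_level;
            }

            if (sd->level >= request) {
                    /* Turn off idle balance on this domain: */
                    sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
            }
    }

so a cpuset that sets relax_domain_level above default_relax_domain_level
does keep newidle balancing enabled on more levels than the default, for
that cpuset.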
>>
>> Signed-off-by: Vitalii Bursov <vitaly@...sov.com>
>> ---
>> kernel/cgroup/cpuset.c | 2 +-
>> kernel/sched/debug.c | 1 +
>> kernel/sched/topology.c | 2 +-
>> 3 files changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index 4237c874871..da24187c4e0 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -2948,7 +2948,7 @@ bool current_cpuset_is_being_rebound(void)
>> static int update_relax_domain_level(struct cpuset *cs, s64 val)
>> {
>> #ifdef CONFIG_SMP
>> - if (val < -1 || val >= sched_domain_level_max)
>> + if (val < -1 || val > sched_domain_level_max + 1)
>> return -EINVAL;
>> #endif
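
(With this change the accepted range becomes -1 .. sched_domain_level_max + 1
instead of -1 .. sched_domain_level_max - 1, so both level_max and
level_max+1 can now be requested.)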
>>
>> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
>> index 8d5d98a5834..8454cd4e5e1 100644
>> --- a/kernel/sched/debug.c
>> +++ b/kernel/sched/debug.c
>> @@ -423,6 +423,7 @@ static void register_sd(struct sched_domain *sd, struct dentry *parent)
>>
>> #undef SDM
>>
>> + debugfs_create_u32("level", 0444, parent, (u32 *)&sd->level);
>
> IMO, this should be a separate patch as it's not part of the fix
Thanks, I'll split it.
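The intent of the new file is to make it easy to see which level a given
domain has, e.g. by reading /sys/kernel/debug/sched/domains/cpu0/domain*/level
(assuming the usual sched debugfs layout), so picking a sensible
relax_domain_level value gets easier.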
>> debugfs_create_file("flags", 0444, parent, &sd->flags, &sd_flags_fops);
>> debugfs_create_file("groups_flags", 0444, parent, &sd->groups->flags, &sd_flags_fops);
>> }
>> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
>> index 99ea5986038..3127c9b30af 100644
>> --- a/kernel/sched/topology.c
>> +++ b/kernel/sched/topology.c
>> @@ -1468,7 +1468,7 @@ static void set_domain_attribute(struct sched_domain *sd,
>> } else
>> request = attr->relax_domain_level;
>>
>> - if (sd->level > request) {
>> + if (sd->level >= request) {
>
> good catch and worth :
> Fixes: 9ae7ab20b483 ("sched/topology: Don't set SD_BALANCE_WAKE on
> cpuset domain relax")
>
Will add this.
Thanks.
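
To make the off-by-one concrete: with the documented semantics, val == 1
should only allow the search within a core (level 0), but with ">" the
level-1 domain also kept SD_BALANCE_NEWIDLE; and val == 0 ("no search")
could never turn it off on level 0, so newidle balancing could not be
fully disabled.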
>
>> /* Turn off idle balance on this domain: */
>> sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
>> }
>> --
>> 2.20.1
>>