Open Source and information security mailing list archives
 
Date:   Thu, 31 Oct 2019 18:23:12 +0100
From:   Valentin Schneider <valentin.schneider@....com>
To:     Michal Koutný <mkoutny@...e.com>
Cc:     linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
        lizefan@...wei.com, tj@...nel.org, hannes@...xchg.org,
        mingo@...nel.org, peterz@...radead.org, vincent.guittot@...aro.org,
        Dietmar.Eggemann@....com, morten.rasmussen@....com,
        qperret@...gle.com, stable@...r.kernel.org
Subject: Re: [PATCH v4 1/2] sched/topology: Don't try to build empty sched
 domains

Hi Michal,

On 31/10/2019 17:23, Michal Koutný wrote:
> On Wed, Oct 23, 2019 at 04:37:44PM +0100, Valentin Schneider <valentin.schneider@....com> wrote:
>> Prevent generate_sched_domains() from returning empty cpumasks, and add
>> some assertion in build_sched_domains() to scream bloody murder if it
>> happens again.
> Good catch. It makes sense to prune the empty domains in
> generate_sched_domains already.
> 
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index c52bc91f882b..c87ee6412b36 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -798,7 +798,8 @@ static int generate_sched_domains(cpumask_var_t **domains,
>>  		    cpumask_subset(cp->cpus_allowed, top_cpuset.effective_cpus))
>>  			continue;
>>  
>> -		if (is_sched_load_balance(cp))
>> +		if (is_sched_load_balance(cp) &&
>> +		    !cpumask_empty(cp->effective_cpus))
>>  			csa[csn++] = cp;
> If I didn't overlook anything, cp->effective_cpus can contain CPUs
> excluded by housekeeping_cpumask(HK_FLAG_DOMAIN) later, i.e. possibly
> still returning domains with empty cpusets.
> 
> I'd suggest moving the emptiness check down into the loop where domain
> cpumasks are ultimately constructed.
> 

Ah, wasn't aware of this - thanks for having a look!

I think the check needs to happen before the final cpumasks get built:
by that point the cpumask array has already been assembled and is
handed off directly to the sched domain rebuild.

Do you reckon the following would work? 

----8<----
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index c87ee6412b36..e4c10785dc7c 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -798,8 +798,14 @@ static int generate_sched_domains(cpumask_var_t **domains,
 		    cpumask_subset(cp->cpus_allowed, top_cpuset.effective_cpus))
 			continue;
 
+		/*
+		 * Skip cpusets that would lead to an empty sched domain.
+		 * That could be because effective_cpus is empty, or because
+		 * it's only spanning CPUs outside the housekeeping mask.
+		 */
 		if (is_sched_load_balance(cp) &&
-		    !cpumask_empty(cp->effective_cpus))
+		    cpumask_intersects(cp->effective_cpus,
+				       housekeeping_cpumask(HK_FLAG_DOMAIN)))
 			csa[csn++] = cp;
 
 		/* skip @cp's subtree if not a partition root */
