Date:   Tue, 13 Jun 2017 15:04:30 -0500
From:   Michael Bringmann <mwb@...ux.vnet.ibm.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     Lai Jiangshan <jiangshanlai@...il.com>,
        linux-kernel@...r.kernel.org,
        Nathan Fontenot <nfont@...ux.vnet.ibm.com>
Subject: Re: [PATCH] workqueue: Ensure that cpumask set for pools created
 after boot

Hello:

On 06/12/2017 12:32 PM, Tejun Heo wrote:
> Hello,
> 
> On Mon, Jun 12, 2017 at 12:10:49PM -0500, Michael Bringmann wrote:
>>> The reason why we're ending up with empty masks is because
>>> wq_calc_node_cpumask() is assuming that the possible node cpumask is
>>> always a superset of online (as it should).  We can trigger a fat
>>> warning there if that isn't so and just return false from that
>>> function.
>>
>> What would that look like?  I should be able to test it on top of the
>> other changes / corrections.
> 
> So, the function looks like the following now.
> 
>   static bool wq_calc_node_cpumask(const struct workqueue_attrs *attrs, int node,
> 				   int cpu_going_down, cpumask_t *cpumask)
>   {
> 	  if (!wq_numa_enabled || attrs->no_numa)
> 		  goto use_dfl;
> 
> 	  /* does @node have any online CPUs @attrs wants? */
> A:	cpumask_and(cpumask, cpumask_of_node(node), attrs->cpumask);
> 	  if (cpu_going_down >= 0)
> 		  cpumask_clear_cpu(cpu_going_down, cpumask);
> 
> B:	if (cpumask_empty(cpumask))
> 		  goto use_dfl;
> 
> 	  /* yeap, return possible CPUs in @node that @attrs wants */
> C:	cpumask_and(cpumask, attrs->cpumask, wq_numa_possible_cpumask[node]);
> 	  return !cpumask_equal(cpumask, attrs->cpumask);
> 
>   use_dfl:
> 	  cpumask_copy(cpumask, attrs->cpumask);
> 	  return false;
>   }
> 
> A is calculating the target cpumask to use using the online map.  B
> falls back to dfl mask if the intersection is empty.  C calculates the
> eventual mask to use from the intersection of possible mask and what's
> requested.  The assumption is that because possible is a superset of
> online, C's result can't be smaller than A.
> 
> So, what we can do is if to calculate the possible intersection,
> compare it against the online intersection, and if the latter is
> bigger, trigger a big fat warning and return false there.

So the revisions to the function would look like this, correct?

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c74bf39..5d7674e 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3564,19 +3564,28 @@ static struct pool_workqueue *alloc_unbound_pwq(struct workqueue_struct *wq,
 static bool wq_calc_node_cpumask(const struct workqueue_attrs *attrs, int node,
 				 int cpu_going_down, cpumask_t *cpumask)
 {
+	cpumask_t	onl_targ_cm;
+
 	if (!wq_numa_enabled || attrs->no_numa)
 		goto use_dfl;
 
 	/* does @node have any online CPUs @attrs wants? */
-	cpumask_and(cpumask, cpumask_of_node(node), attrs->cpumask);
+	cpumask_and(&onl_targ_cm, cpumask_of_node(node), attrs->cpumask);
 	if (cpu_going_down >= 0)
-		cpumask_clear_cpu(cpu_going_down, cpumask);
+		cpumask_clear_cpu(cpu_going_down, &onl_targ_cm);
 
-	if (cpumask_empty(cpumask))
+	if (cpumask_empty(&onl_targ_cm))
 		goto use_dfl;
 
 	/* yeap, return possible CPUs in @node that @attrs wants */
 	cpumask_and(cpumask, attrs->cpumask, wq_numa_possible_cpumask[node]);
+
+	if (cpumask_weight(&onl_targ_cm) > cpumask_weight(cpumask)) {
+		printk(KERN_INFO "WARNING: WQ cpumask: onl intersect > "
+						"possible intersect\n");
+		return false;
+	}
+
 	return !cpumask_equal(cpumask, attrs->cpumask);
 
 use_dfl:

> 
>>> I have no idea about the specifics of ppc but at least the code base
>>> we have currently expect all possible cpus and nodes and their
>>> mappings to be established on boot.
>>
>> Hopefully, the new properties will fix the holes in the current implementation
>> with regard to hot-add.
> 
> Yeah, that's the only proper fix here.

I incorporated other changes to try to fill in the possible map more accurately,
and with only the above modification to workqueue.c, I ran a hot-add CPU test
to add 8 VPs to a powerpc Shared CPU configuration.  It produced many of the
warning messages -- far more than I was expecting -- but it did not crash.
I have attached a compressed copy of the log file.

> 
> Thanks.
> 

Thanks.

-- 
Michael W. Bringmann
Linux Technology Center
IBM Corporation
Tie-Line  363-5196
External: (512) 286-5196
Cell:       (512) 466-0650
mwb@...ux.vnet.ibm.com

Download attachment "boot-run.log.xz" of type "application/x-xz" (14284 bytes)
