Date:   Tue, 15 Nov 2016 23:41:58 +0100 (CET)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Christoph Hellwig <hch@....de>
cc:     linux-kernel@...r.kernel.org
Subject: Re: [PATCH] irq/affinity: fix irq_create_affinity_masks for the pre_vectors case

On Tue, 15 Nov 2016, Christoph Hellwig wrote:

> Adjust the exit condition for assigning the affinity vectors to take the
> pre_vectors into account.  Otherwise the last vector will get a cpu mask
> for all CPUs by accidentally hitting the post_vectors case.
> 
> Signed-off-by: Christoph Hellwig <hch@....de>
> ---
>  kernel/irq/affinity.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index 17360bd..2ca420a 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -107,7 +107,9 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
>  		/* Calculate the number of cpus per vector */
>  		ncpus = cpumask_weight(nmsk);
>  
> -		for (v = 0; curvec < affv && v < vecs_to_assign; curvec++, v++) {
> +		for (v = 0;
> +		     curvec < affd->pre_vectors + affv && v < vecs_to_assign;
> +		     curvec++, v++) {
>  			cpus_per_vec = ncpus / vecs_to_assign;
>  
>  			/* Account for extra vectors to compensate rounding errors */

We have the same exit condition in the (affv <= nodes) case and further
down in the outer loop. So the complete patch should be something like
this:

--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -61,6 +61,7 @@ irq_create_affinity_masks(int nvecs, con
 {
 	int n, nodes, vecs_per_node, cpus_per_vec, extra_vecs, curvec;
 	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
+	int last_affv = affv + affd->pre_vectors;
 	nodemask_t nodemsk = NODE_MASK_NONE;
 	struct cpumask *masks;
 	cpumask_var_t nmsk;
@@ -87,7 +88,7 @@ irq_create_affinity_masks(int nvecs, con
 	if (affv <= nodes) {
 		for_each_node_mask(n, nodemsk) {
 			cpumask_copy(masks + curvec, cpumask_of_node(n));
-			if (++curvec == affv)
+			if (++curvec == last_affv)
 				break;
 		}
 		goto done;
@@ -107,7 +108,8 @@ irq_create_affinity_masks(int nvecs, con
 		/* Calculate the number of cpus per vector */
 		ncpus = cpumask_weight(nmsk);
 
-		for (v = 0; curvec < affv && v < vecs_to_assign; curvec++, v++) {
+		for (v = 0; curvec < last_affv && v < vecs_to_assign;
+		     curvec++, v++) {
 			cpus_per_vec = ncpus / vecs_to_assign;
 
 			/* Account for extra vectors to compensate rounding errors */
@@ -119,7 +121,7 @@ irq_create_affinity_masks(int nvecs, con
 			irq_spread_init_one(masks + curvec, nmsk, cpus_per_vec);
 		}
 
-		if (curvec >= affv)
+		if (curvec >= last_affv)
 			break;
 	}
 

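For illustration only, here is a standalone user-space sketch (not kernel code; the values nvecs = 10, pre_vectors = 2, post_vectors = 0 are made up purely to show the arithmetic) of why the old bound stops the spread loop short when pre_vectors is non-zero:

#include <stdio.h>

int main(void)
{
        /* Hypothetical sizes, chosen only to demonstrate the bounds. */
        int nvecs = 10, pre_vectors = 2, post_vectors = 0;
        int affv = nvecs - pre_vectors - post_vectors;  /* 8 vectors to spread */
        int last_affv = affv + pre_vectors;             /* 10 */
        int curvec;

        /* The spread loop starts after the pre_vectors entries. */
        for (curvec = pre_vectors; curvec < affv; curvec++)
                ;       /* old bound */
        printf("old bound: spread stops at vector %d, %d vector(s) fall through\n",
               curvec, last_affv - curvec);

        for (curvec = pre_vectors; curvec < last_affv; curvec++)
                ;       /* new bound */
        printf("new bound: spread stops at vector %d, %d vector(s) fall through\n",
               curvec, last_affv - curvec);
        return 0;
}

Any vector that "falls through" is left for the post_vectors handling, which (per the changelog above) gives it a mask covering all CPUs.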