Message-ID: <87ikmoqx6o.ffs@tglx>
Date: Mon, 28 Apr 2025 14:37:19 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Daniel Wagner <wagi@...nel.org>, Jens Axboe <axboe@...nel.dk>, Keith
 Busch <kbusch@...nel.org>, Christoph Hellwig <hch@....de>, Sagi Grimberg
 <sagi@...mberg.me>, "Michael S. Tsirkin" <mst@...hat.com>
Cc: "Martin K. Petersen" <martin.petersen@...cle.com>, Costa Shulyupin
 <costa.shul@...hat.com>, Juri Lelli <juri.lelli@...hat.com>, Valentin
 Schneider <vschneid@...hat.com>, Waiman Long <llong@...hat.com>, Ming Lei
 <ming.lei@...hat.com>, Frederic Weisbecker <frederic@...nel.org>, Mel
 Gorman <mgorman@...e.de>, Hannes Reinecke <hare@...e.de>, Mathieu
 Desnoyers <mathieu.desnoyers@...icios.com>, linux-kernel@...r.kernel.org,
 linux-block@...r.kernel.org, linux-nvme@...ts.infradead.org,
 megaraidlinux.pdl@...adcom.com, linux-scsi@...r.kernel.org,
 storagedev@...rochip.com, virtualization@...ts.linux.dev,
 GR-QLogic-Storage-Upstream@...vell.com, Daniel Wagner <wagi@...nel.org>
Subject: Re: [PATCH v6 1/9] lib/group_cpus: let group_cpu_evenly return
 number initialized masks

On Thu, Apr 24 2025 at 20:19, Daniel Wagner wrote:

"let group_cpu_evenly return number initialized masks' is not a
sentence.

  Let group_cpus_evenly() return the number of initialized masks

is actually parseable.

> group_cpu_evenly might allocated less groups then the requested:

group_cpus_evenly() might have .... than requested.

> group_cpu_evenly
>   __group_cpus_evenly
>     alloc_nodes_groups
>       # allocated total groups may be less than numgrps when
>       # active total CPU number is less then numgrps
>
> In this case, the caller will do an out-of-bounds access because the
> caller assumes the returned masks array has numgrps entries.
>
> Return the number of groups created so the caller can limit the access
> range accordingly.
>
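IOW, callers are expected to do something along these lines (just a
sketch; use_mask() is a stand-in for whatever the caller does with each
mask):

	unsigned int nr_masks;
	struct cpumask *masks = group_cpus_evenly(numgrps, &nr_masks);

	if (!masks)
		return -ENOMEM;

	/* Only masks[0 ... nr_masks - 1] are initialized, nr_masks <= numgrps */
	for (unsigned int i = 0; i < nr_masks; i++)
		use_mask(&masks[i]);	/* placeholder for per-mask work */
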
> --- a/include/linux/group_cpus.h
> +++ b/include/linux/group_cpus.h
> @@ -9,6 +9,7 @@
>  #include <linux/kernel.h>
>  #include <linux/cpu.h>
>  
> -struct cpumask *group_cpus_evenly(unsigned int numgrps);
> +struct cpumask *group_cpus_evenly(unsigned int numgrps,
> +				  unsigned int *nummasks);

One line

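That is, simply collapsed onto one line:

  struct cpumask *group_cpus_evenly(unsigned int numgrps, unsigned int *nummasks);
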
>  #endif
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index 44a4eba80315cc098ecfa366ca1d88483641b12a..d2aefab5eb2b929877ced43f48b6268098484bd7 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -70,20 +70,21 @@ irq_create_affinity_masks(unsigned int nvecs, struct irq_affinity *affd)
>  	 */
>  	for (i = 0, usedvecs = 0; i < affd->nr_sets; i++) {
>  		unsigned int this_vecs = affd->set_size[i];
> +		unsigned int nr_masks;

  unsigned int nr_masks, this_vecs = ....

>  		int j;

As you touch the loop anyway, move this into the for ().

> -		struct cpumask *result = group_cpus_evenly(this_vecs);
> +		struct cpumask *result = group_cpus_evenly(this_vecs, &nr_masks);
>  
>  		if (!result) {
>  			kfree(masks);
>  			return NULL;
>  		}
>  
> -		for (j = 0; j < this_vecs; j++)

                for (int j = 0; ....)

> +		for (j = 0; j < nr_masks; j++)
>  			cpumask_copy(&masks[curvec + j].mask, &result[j]);
>  		kfree(result);
>  
> -		curvec += this_vecs;
> -		usedvecs += this_vecs;
> +		curvec += nr_masks;
> +		usedvecs += nr_masks;
>  	}
>  
>  	/* Fill out vectors at the end that don't need affinity */
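
With the above folded in, the loop body would read something like this
(sketch, untested):

	for (i = 0, usedvecs = 0; i < affd->nr_sets; i++) {
		unsigned int nr_masks, this_vecs = affd->set_size[i];
		struct cpumask *result = group_cpus_evenly(this_vecs, &nr_masks);

		if (!result) {
			kfree(masks);
			return NULL;
		}

		/* Copy only the masks which were actually initialized */
		for (int j = 0; j < nr_masks; j++)
			cpumask_copy(&masks[curvec + j].mask, &result[j]);
		kfree(result);

		curvec += nr_masks;
		usedvecs += nr_masks;
	}
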
> diff --git a/lib/group_cpus.c b/lib/group_cpus.c
> index ee272c4cefcc13907ce9f211f479615d2e3c9154..016c6578a07616959470b47121459a16a1bc99e5 100644
> --- a/lib/group_cpus.c
> +++ b/lib/group_cpus.c
> @@ -332,9 +332,11 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
>  /**
>   * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
>   * @numgrps: number of groups
> + * @nummasks: number of initialized cpumasks
>   *
>   * Return: cpumask array if successful, NULL otherwise. And each element
> - * includes CPUs assigned to this group
> + * includes CPUs assigned to this group. nummasks contains the number
> + * of initialized masks which can be less than numgrps.
>   *
>   * Try to put close CPUs from viewpoint of CPU and NUMA locality into
>   * same group, and run two-stage grouping:
> @@ -344,7 +346,8 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
>   * We guarantee in the resulted grouping that all CPUs are covered, and
>   * no same CPU is assigned to multiple groups
>   */
> -struct cpumask *group_cpus_evenly(unsigned int numgrps)
> +struct cpumask *group_cpus_evenly(unsigned int numgrps,
> +				  unsigned int *nummasks)

No line break required.

>  {
>  	unsigned int curgrp = 0, nr_present = 0, nr_others = 0;
>  	cpumask_var_t *node_to_cpumask;
> @@ -421,10 +424,12 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
>  		kfree(masks);
>  		return NULL;
>  	}
> +	*nummasks = nr_present + nr_others;
>  	return masks;
>  }
>  #else /* CONFIG_SMP */
> -struct cpumask *group_cpus_evenly(unsigned int numgrps)
> +struct cpumask *group_cpus_evenly(unsigned int numgrps,
> +				  unsigned int *nummasks)

Ditto.

Other than that:

Acked-by: Thomas Gleixner <tglx@...utronix.de>
