Message-ID: <f4ed094f-7370-4121-9df6-454411452751@flourine.local>
Date: Wed, 3 Sep 2025 14:42:10 +0200
From: Daniel Wagner <dwagner@...e.de>
To: John Garry <john.g.garry@...cle.com>
Cc: Daniel Wagner <wagi@...nel.org>, Jens Axboe <axboe@...nel.dk>,
Keith Busch <kbusch@...nel.org>, Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
"Michael S. Tsirkin" <mst@...hat.com>, Aaron Tomlin <atomlin@...mlin.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>, Thomas Gleixner <tglx@...utronix.de>,
Costa Shulyupin <costa.shul@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
Valentin Schneider <vschneid@...hat.com>, Waiman Long <llong@...hat.com>, Ming Lei <ming.lei@...hat.com>,
Frederic Weisbecker <frederic@...nel.org>, Mel Gorman <mgorman@...e.de>, Hannes Reinecke <hare@...e.de>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
linux-nvme@...ts.infradead.org, megaraidlinux.pdl@...adcom.com, linux-scsi@...r.kernel.org,
storagedev@...rochip.com, virtualization@...ts.linux.dev,
GR-QLogic-Storage-Upstream@...vell.com
Subject: Re: [PATCH v7 01/10] lib/group_cpus: Add group_masks_cpus_evenly()
On Fri, Jul 11, 2025 at 09:28:12AM +0100, John Garry wrote:
> > +/**
> > + * group_mask_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> > + * @numgrps: number of groups
>
> this comment could be a bit more useful
>
> > + * @cpu_mask: CPU to consider for the grouping
>
> this is a CPU mask, and not a specific CPU index, right?
Yes, I've updated the documentation to:
/**
* group_mask_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
* @numgrps: number of cpumasks to create
* @mask: CPUs to consider for the grouping
 * @nummasks: number of initialized cpumasks
*
 * Return: cpumask array if successful, NULL otherwise. Only the CPUs
 * marked in @mask are considered for the grouping, and each element of
 * the array contains the CPUs assigned to that group. @nummasks is set
 * to the number of initialized masks, which can be less than @numgrps.
*
* Try to put close CPUs from viewpoint of CPU and NUMA locality into
* same group, and run two-stage grouping:
* 1) allocate present CPUs on these groups evenly first
* 2) allocate other possible CPUs on these groups evenly
*
 * We guarantee in the resulting grouping that all CPUs are covered,
 * and no CPU is assigned to multiple groups.
*/
struct cpumask *group_mask_cpus_evenly(unsigned int numgrps,
const struct cpumask *mask,
unsigned int *nummasks)
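For illustration, a caller could consume the returned array roughly like
this (just a sketch against the signature above; the function name and
queue terminology are made up):

static int example_setup_queue_masks(unsigned int nr_queues,
				     const struct cpumask *allowed)
{
	struct cpumask *masks;
	unsigned int nr_masks, i;

	masks = group_mask_cpus_evenly(nr_queues, allowed, &nr_masks);
	if (!masks)
		return -ENOMEM;

	/* Only the first nr_masks entries are initialized; nr_masks
	 * can be smaller than nr_queues. */
	for (i = 0; i < nr_masks; i++)
		pr_debug("group %u -> %*pbl\n", i,
			 cpumask_pr_args(&masks[i]));

	kfree(masks);
	return 0;
}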
> > + ret = __group_cpus_evenly(0, numgrps, node_to_cpumask, cpu_mask, nmsk,
> > + masks);
>
> maybe it's personal taste, but I don't think that it's a good style to
> always pass through 'fail' labels, even if we have not failed in some
> step
I'd rather leave it as it is, because it matches the existing code in
group_cpus_evenly. And there is also alloc_node_to_cpumask which does
the same. Consistency wins IMO.
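For reference, the style in question looks roughly like this (abbreviated
sketch of what group_cpus_evenly() does today; label and helper names are
approximate and the earlier allocation error paths are elided):

	ret = __group_cpus_evenly(0, numgrps, node_to_cpumask, cpu_mask,
				  nmsk, masks);

	/* No goto on success: the success path simply falls through the
	 * cleanup labels that the earlier allocation failures jump to. */
	free_cpumask_var(nmsk);
 fail_node_to_cpumask:
	free_node_to_cpumask(node_to_cpumask);
	if (ret < 0) {
		kfree(masks);
		return NULL;
	}
	return masks;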