Date:   Fri, 2 Nov 2018 09:09:50 -0600
From:   Keith Busch <keith.busch@...el.com>
To:     Ming Lei <ming.lei@...hat.com>
Cc:     Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
        linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 13/16] irq: add support for allocating (and affinitizing)
 sets of IRQs

On Fri, Nov 02, 2018 at 10:37:07PM +0800, Ming Lei wrote:
> On Tue, Oct 30, 2018 at 12:32:49PM -0600, Jens Axboe wrote:
> > A driver may have a need to allocate multiple sets of MSI/MSI-X
> > interrupts, and have them appropriately affinitized. Add support for
> > defining a number of sets in the irq_affinity structure, of varying
> > sizes, and get each set affinitized correctly across the machine.
> > 
> > Cc: Thomas Gleixner <tglx@...utronix.de>
> > Cc: linux-kernel@...r.kernel.org
> > Reviewed-by: Hannes Reinecke <hare@...e.com>
> > Reviewed-by: Ming Lei <ming.lei@...hat.com>
> > Signed-off-by: Jens Axboe <axboe@...nel.dk>
> > ---
> >  drivers/pci/msi.c         | 14 ++++++++++++++
> >  include/linux/interrupt.h |  4 ++++
> >  kernel/irq/affinity.c     | 40 ++++++++++++++++++++++++++++++---------
> >  3 files changed, 49 insertions(+), 9 deletions(-)
> > 
> > diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> > index af24ed50a245..e6c6e10b9ceb 100644
> > --- a/drivers/pci/msi.c
> > +++ b/drivers/pci/msi.c
> > @@ -1036,6 +1036,13 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
> >  	if (maxvec < minvec)
> >  		return -ERANGE;
> >  
> > +	/*
> > +	 * If the caller is passing in sets, we can't support a range of
> > +	 * vectors. The caller needs to handle that.
> > +	 */
> > +	if (affd->nr_sets && minvec != maxvec)
> > +		return -EINVAL;
> > +
> >  	if (WARN_ON_ONCE(dev->msi_enabled))
> >  		return -EINVAL;
> >  
> > @@ -1087,6 +1094,13 @@ static int __pci_enable_msix_range(struct pci_dev *dev,
> >  	if (maxvec < minvec)
> >  		return -ERANGE;
> >  
> > +	/*
> > +	 * If the caller is passing in sets, we can't support a range of
> > +	 * supported vectors. The caller needs to handle that.
> > +	 */
> > +	if (affd->nr_sets && minvec != maxvec)
> > +		return -EINVAL;
> > +
> >  	if (WARN_ON_ONCE(dev->msix_enabled))
> >  		return -EINVAL;
> >  
> > diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> > index 1d6711c28271..ca397ff40836 100644
> > --- a/include/linux/interrupt.h
> > +++ b/include/linux/interrupt.h
> > @@ -247,10 +247,14 @@ struct irq_affinity_notify {
> >   *			the MSI(-X) vector space
> >   * @post_vectors:	Don't apply affinity to @post_vectors at end of
> >   *			the MSI(-X) vector space
> > + * @nr_sets:		Number of affinitized sets (length of *sets array)
> > + * @sets:		Size of each affinitized set
> >   */
> >  struct irq_affinity {
> >  	int	pre_vectors;
> >  	int	post_vectors;
> > +	int	nr_sets;
> > +	int	*sets;
> >  };
> >  
> >  #if defined(CONFIG_SMP)
> > diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> > index f4f29b9d90ee..2046a0f0f0f1 100644
> > --- a/kernel/irq/affinity.c
> > +++ b/kernel/irq/affinity.c
> > @@ -180,6 +180,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
> >  	int curvec, usedvecs;
> >  	cpumask_var_t nmsk, npresmsk, *node_to_cpumask;
> >  	struct cpumask *masks = NULL;
> > +	int i, nr_sets;
> >  
> >  	/*
> >  	 * If there aren't any vectors left after applying the pre/post
> > @@ -210,10 +211,23 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
> >  	get_online_cpus();
> >  	build_node_to_cpumask(node_to_cpumask);
> >  
> > -	/* Spread on present CPUs starting from affd->pre_vectors */
> > -	usedvecs = irq_build_affinity_masks(affd, curvec, affvecs,
> > -					    node_to_cpumask, cpu_present_mask,
> > -					    nmsk, masks);
> > +	/*
> > +	 * Spread on present CPUs starting from affd->pre_vectors. If we
> > +	 * have multiple sets, build each set's affinity mask separately.
> > +	 */
> > +	nr_sets = affd->nr_sets;
> > +	if (!nr_sets)
> > +		nr_sets = 1;
> > +
> > +	for (i = 0, usedvecs = 0; i < nr_sets; i++) {
> > +		int this_vecs = affd->sets ? affd->sets[i] : affvecs;
> > +		int nr;
> > +
> > +		nr = irq_build_affinity_masks(affd, curvec, this_vecs,
> > +					      node_to_cpumask, cpu_present_mask,
> > +					      nmsk, masks + usedvecs);
> 
> The last parameter of the above function should have been 'masks',
> because irq_build_affinity_masks() always treats 'masks' as the base
> address of the array.

We have multiple "bases" when using sets, so we have to advance the base
pointer for each set by the number of vectors already consumed. If you just
pass 'masks' every time, each set will overwrite the masks written by the
previous set.
