Date:   Fri, 15 Feb 2019 09:24:36 +0000
From:   Marc Zyngier <marc.zyngier@....com>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Ming Lei <ming.lei@...hat.com>, Christoph Hellwig <hch@....de>,
        Bjorn Helgaas <helgaas@...nel.org>,
        Jens Axboe <axboe@...nel.dk>, <linux-block@...r.kernel.org>,
        Sagi Grimberg <sagi@...mberg.me>,
        <linux-nvme@...ts.infradead.org>, <linux-pci@...r.kernel.org>,
        Keith Busch <keith.busch@...el.com>,
        Sumit Saxena <sumit.saxena@...adcom.com>,
        Kashyap Desai <kashyap.desai@...adcom.com>,
        Shivasharan Srikanteshwara 
        <shivasharan.srikanteshwara@...adcom.com>
Subject: Re: [patch V5 4/8] nvme-pci: Simplify interrupt allocation

On Thu, 14 Feb 2019 20:47:59 +0000,
Thomas Gleixner <tglx@...utronix.de> wrote:
> 
> From: Ming Lei <ming.lei@...hat.com>
> 
> The NVME PCI driver contains a tedious mechanism for interrupt
> allocation, which is necessary to adjust the number and size of interrupt
> sets to the maximum number of available interrupts, which depends on the
> underlying PCI capabilities and the available CPU resources.
> 
> It works around the former shortcomings of the PCI and core interrupt
> allocation mechanisms in combination with interrupt sets.
> 
> The PCI interrupt allocation function lets the caller provide a maximum
> and a minimum number of interrupts to be allocated and tries to allocate
> as many as possible. This worked without driver interaction as long as
> there was only a single set of interrupts to handle.
> 
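[ For reference, the entry point being discussed is

	int pci_alloc_irq_vectors_affinity(struct pci_dev *dev,
			unsigned int min_vecs, unsigned int max_vecs,
			unsigned int flags,
			struct irq_affinity *affd);

  which tries to allocate max_vecs vectors and steps down towards
  min_vecs on its own. ]
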
> With the addition of support for multiple interrupt sets in the generic
> affinity spreading logic, which is invoked from the PCI interrupt
> allocation, the adaptive loop in the PCI interrupt allocation did not
> work for multiple interrupt sets. The reason is that depending on the
> total number of interrupts which the PCI allocation adaptive loop tries
> to allocate in each step, the number and the size of the interrupt sets
> need to be adapted as well. Due to the way interrupt set support was
> implemented, there was no way for the PCI interrupt allocation code or the
> core affinity spreading mechanism to invoke a driver specific function
> for adapting the interrupt sets configuration.
> 
> As a consequence the driver had to implement another adaptive loop around
> the PCI interrupt allocation function, calling it with the maximum and
> minimum number of interrupts set to the same value. This ensured that the
> allocation either succeeded or immediately failed without any attempt to
> adjust the number of interrupts in the PCI code.
> 
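[ For readers who want the shape of the pattern being removed, here is a
  rough sketch from my reading of the old code, not the literal driver
  source:

	/* driver-side adaptive loop, now gone */
	do {
		/* recompute the set sizes for this attempt */
		nvme_calc_io_queues(dev, irq_queues);
		/* min == max: the PCI core cannot adapt on its own */
		result = pci_alloc_irq_vectors_affinity(pdev,
				irq_queues, irq_queues,
				PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY,
				&affd);
		if (result > 0)
			break;
		irq_queues--;	/* shrink and retry */
	} while (irq_queues > 0);
]
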
> The core code now allows drivers to provide a callback to recalculate the
> number and the size of interrupt sets during PCI interrupt allocation,
> which in turn allows the PCI interrupt allocation function to be called
> in the same way as with a single set of interrupts. The PCI code handles
> the adaptive loop and the interrupt affinity spreading mechanism invokes
> the driver callback to adapt the interrupt set configuration to the
> current loop value. This replaces the adaptive loop in the driver
> completely.
> 
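[ For context, the wiring added earlier in this series looks roughly
  like the following; a simplified sketch, not a quote of the actual
  hunk:

	struct irq_affinity affd = {
		.pre_vectors	= 1,			/* admin queue */
		.calc_sets	= nvme_calc_irq_sets,	/* resize the sets */
		.priv		= dev,
	};

	/* a single call; the PCI core now owns the adaptive loop */
	result = pci_alloc_irq_vectors_affinity(pdev, 1, irq_queues,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
]
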
> Implement the NVME specific callback which adjusts the interrupt sets
> configuration and remove the adaptive allocation loop.
> 
> [ tglx: Simplify the callback further and restore the dropped adjustment of
>   	number of sets ]
> 
> Signed-off-by: Ming Lei <ming.lei@...hat.com>
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
> 
> ---
>  drivers/nvme/host/pci.c |  108 ++++++++++++------------------------------------
>  1 file changed, 28 insertions(+), 80 deletions(-)
> 
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2041,41 +2041,32 @@ static int nvme_setup_host_mem(struct nv
>  	return ret;
>  }
>  
> -/* irq_queues covers admin queue */
> -static void nvme_calc_io_queues(struct nvme_dev *dev, unsigned int irq_queues)
> +/*
> + * nirqs is the number of interrupts available for write and read
> + * queues. The core already reserved an interrupt for the admin queue.
> + */
> +static void nvme_calc_irq_sets(struct irq_affinity *affd, unsigned int nrirqs)
>  {
> -	unsigned int this_w_queues = write_queues;
> -
> -	WARN_ON(!irq_queues);
> -
> -	/*
> -	 * Setup read/write queue split, assign admin queue one independent
> -	 * irq vector if irq_queues is > 1.
> -	 */
> -	if (irq_queues <= 2) {
> -		dev->io_queues[HCTX_TYPE_DEFAULT] = 1;
> -		dev->io_queues[HCTX_TYPE_READ] = 0;
> -		return;
> -	}
> +	struct nvme_dev *dev = affd->priv;
> +	unsigned int nr_read_queues;
>  
>  	/*
> -	 * If 'write_queues' is set, ensure it leaves room for at least
> -	 * one read queue and one admin queue
> -	 */
> -	if (this_w_queues >= irq_queues)
> -		this_w_queues = irq_queues - 2;
> -
> -	/*
> -	 * If 'write_queues' is set to zero, reads and writes will share
> -	 * a queue set.
> -	 */
> -	if (!this_w_queues) {
> -		dev->io_queues[HCTX_TYPE_DEFAULT] = irq_queues - 1;
> -		dev->io_queues[HCTX_TYPE_READ] = 0;
> -	} else {
> -		dev->io_queues[HCTX_TYPE_DEFAULT] = this_w_queues;
> -		dev->io_queues[HCTX_TYPE_READ] = irq_queues - this_w_queues - 1;
> -	}
> +	 * If only one interrupt is available, combine write and read
> +	 * queues. If 'write_queues' is set, ensure it leaves room for at
> +	 * least one read queue.

[Full disclaimer: I have only had two coffees this morning, and it is
only at the fourth that my brain is able to kick in...]

I don't know much about NVME, but I feel like there is a small
disconnect between the code and the above comment, which says "leave
room for at least one read queue"...

> +	 */
> +	if (nrirqs == 1)
> +		nr_read_queues = 0;
> +	else if (write_queues >= nrirqs)
> +		nr_read_queues = nrirqs - 1;

... while this seems to ensure that we carve a single write queue out of
the irq set. It looks like a departure from the original code, which
would set nr_read_queues to 1 in that particular case.
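
IOW, to preserve the original behaviour I would have expected
something along the lines of

	else if (write_queues >= nrirqs)
		nr_read_queues = 1;	/* one read queue, rest for writes */

but I may well be misreading the original code.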

Thanks,

	M.

-- 
Jazz is not dead, it just smells funny.
