Date:   Wed, 31 Oct 2018 08:32:36 -0600
From:   Jens Axboe <axboe@...nel.dk>
To:     Sagi Grimberg <sagi@...mberg.me>, linux-block@...r.kernel.org,
        linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 14/16] nvme: utilize two queue maps, one for reads and one
 for writes

On 10/30/18 7:57 PM, Sagi Grimberg wrote:
> 
>> +static int queue_irq_offset(struct nvme_dev *dev)
>> +{
>> +	/* if we have more than 1 vec, admin queue offsets us 1 */
> 
> offsets us by 1?

Fixed

>> @@ -1934,13 +2048,48 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
>>   	 * setting up the full range we need.
>>   	 */
>>   	pci_free_irq_vectors(pdev);
>> -	result = pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
>> -			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
>> -	if (result <= 0)
>> -		return -EIO;
>> +
>> +	/*
>> +	 * For irq sets, we have to ask for minvec == maxvec. This passes
>> +	 * any reduction back to us, so we can adjust our queue counts and
>> +	 * IRQ vector needs.
>> +	 */
>> +	do {
>> +		nvme_calc_io_queues(dev, nr_io_queues);
>> +		irq_sets[0] = dev->io_queues[NVMEQ_TYPE_READ];
>> +		irq_sets[1] = dev->io_queues[NVMEQ_TYPE_WRITE];
>> +		if (!irq_sets[1])
>> +			affd.nr_sets = 1;
>> +
>> +		/*
>> +		 * Need IRQs for read+write queues, and one for the admin queue
>> +		 */
>> +		nr_io_queues = irq_sets[0] + irq_sets[1] + 1;
>> +
>> +		result = pci_alloc_irq_vectors_affinity(pdev, nr_io_queues,
>> +				nr_io_queues,
>> +				PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
>> +
>> +		/*
>> +		 * Need to reduce our vec counts
>> +		 */
>> +		if (result == -ENOSPC) {
>> +			nr_io_queues--;
>> +			if (!nr_io_queues)
>> +				return result;
>> +			continue;
>> +		} else if (result <= 0)
>> +			return -EIO;
>> +		break;
>> +	} while (1);
>> +
>>   	dev->num_vecs = result;
>>   	dev->max_qid = max(result - 1, 1);
>>   
>> +	dev_info(dev->ctrl.device, "%d/%d read/write queues\n",
>> +					dev->io_queues[NVMEQ_TYPE_READ],
>> +					dev->io_queues[NVMEQ_TYPE_WRITE]);
>> +
> 
> Perhaps it would be better to move this code into its own function.

Agree, I've done that now.
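
Roughly along these lines -- the helper name (nvme_setup_irqs) and the
irq_affinity setup below are just a sketch reconstructed from the quoted
hunk, not necessarily what the final version of the patch looks like:

static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
{
	struct pci_dev *pdev = to_pci_dev(dev->dev);
	int irq_sets[2];
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* slot for the admin queue */
		.nr_sets = ARRAY_SIZE(irq_sets),
		.sets = irq_sets,
	};
	int result;

	/*
	 * For irq sets, we have to ask for minvec == maxvec. This passes
	 * any reduction back to us, so we can adjust our queue counts and
	 * IRQ vector needs.
	 */
	do {
		nvme_calc_io_queues(dev, nr_io_queues);
		irq_sets[0] = dev->io_queues[NVMEQ_TYPE_READ];
		irq_sets[1] = dev->io_queues[NVMEQ_TYPE_WRITE];
		if (!irq_sets[1])
			affd.nr_sets = 1;

		/* IRQs for the read and write queues, plus the admin queue */
		nr_io_queues = irq_sets[0] + irq_sets[1] + 1;

		result = pci_alloc_irq_vectors_affinity(pdev, nr_io_queues,
				nr_io_queues,
				PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);

		/*
		 * -ENOSPC means the platform can't give us that many
		 * vectors: drop one queue and retry with a new split.
		 */
		if (result == -ENOSPC) {
			nr_io_queues--;
			if (!nr_io_queues)
				return result;
			continue;
		} else if (result <= 0)
			return -EIO;
		break;
	} while (1);

	return result;
}

The point of asking for minvec == maxvec is that
pci_alloc_irq_vectors_affinity() can't silently hand back fewer vectors;
any shortfall comes back as -ENOSPC, so the read/write queue split gets
recomputed before the retry.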

-- 
Jens Axboe
