Message-ID: <D450A21F.3537A%manish.rangankar@cavium.com>
Date:   Tue, 15 Nov 2016 06:14:18 +0000
From:   "Rangankar, Manish" <Manish.Rangankar@...ium.com>
To:     "Martin K. Petersen" <martin.petersen@...cle.com>,
        "lduncan@...e.com" <lduncan@...e.com>,
        Chris Leech <cleech@...hat.com>, Hannes Reinecke <hare@...e.de>
CC:     "linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "Dept-Eng QLogic Storage Upstream" 
        <QLogic-Storage-Upstream@...ium.com>,
        "Mintz, Yuval" <Yuval.Mintz@...ium.com>
Subject: Re: [PATCH v2 3/6] qedi: Add QLogic FastLinQ offload iSCSI driver
 framework.

Hi Hannes,

Please find my responses below.

On 11/11/16 10:13 PM, "Hannes Reinecke" <hare@...e.de> wrote:

>On 11/08/2016 07:57 AM, Manish Rangankar wrote:
>> The QLogic FastLinQ Driver for iSCSI (qedi) is the iSCSI specific module
>> for 41000 Series Converged Network Adapters by QLogic.
>>
>> This patch consists of following changes:
>>   - MAINTAINERS Makefile and Kconfig changes for qedi,
>>   - PCI driver registration,
>>   - iSCSI host level initialization,
>>   - Debugfs and log level infrastructure.
>>
>> Signed-off-by: Nilesh Javali <nilesh.javali@...ium.com>
>> Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@...gic.com>
>> Signed-off-by: Chad Dupuis <chad.dupuis@...ium.com>
>> Signed-off-by: Saurav Kashyap <saurav.kashyap@...ium.com>
>> Signed-off-by: Arun Easi <arun.easi@...ium.com>
>> Signed-off-by: Manish Rangankar <manish.rangankar@...ium.com>
>> ---
>>  MAINTAINERS                         |    6 +
>>  drivers/net/ethernet/qlogic/Kconfig |   12 -
>>  drivers/scsi/Kconfig                |    1 +
>>  drivers/scsi/Makefile               |    1 +
>>  drivers/scsi/qedi/Kconfig           |   10 +
>>  drivers/scsi/qedi/Makefile          |    5 +
>>  drivers/scsi/qedi/qedi.h            |  291 +++++++
>>  drivers/scsi/qedi/qedi_dbg.c        |  143 ++++
>>  drivers/scsi/qedi/qedi_dbg.h        |  144 ++++
>>  drivers/scsi/qedi/qedi_debugfs.c    |  244 ++++++
>>  drivers/scsi/qedi/qedi_hsi.h        |   52 ++
>>  drivers/scsi/qedi/qedi_main.c       | 1616 +++++++++++++++++++++++++++++++++++
>>  drivers/scsi/qedi/qedi_sysfs.c      |   52 ++
>>  drivers/scsi/qedi/qedi_version.h    |   14 +
>>  14 files changed, 2579 insertions(+), 12 deletions(-)
>>  create mode 100644 drivers/scsi/qedi/Kconfig
>>  create mode 100644 drivers/scsi/qedi/Makefile
>>  create mode 100644 drivers/scsi/qedi/qedi.h
>>  create mode 100644 drivers/scsi/qedi/qedi_dbg.c
>>  create mode 100644 drivers/scsi/qedi/qedi_dbg.h
>>  create mode 100644 drivers/scsi/qedi/qedi_debugfs.c
>>  create mode 100644 drivers/scsi/qedi/qedi_hsi.h
>>  create mode 100644 drivers/scsi/qedi/qedi_main.c
>>  create mode 100644 drivers/scsi/qedi/qedi_sysfs.c
>>  create mode 100644 drivers/scsi/qedi/qedi_version.h
>>
[...]
>>
>> +static enum qed_int_mode qedi_int_mode_to_enum(void)
>> +{
>> +	switch (int_mode) {
>> +	case 0: return QED_INT_MODE_MSIX;
>> +	case 1: return QED_INT_MODE_INTA;
>> +	case 2: return QED_INT_MODE_MSI;
>> +	default:
>> +		QEDI_ERR(NULL, "Unknown qede_int_mode=%08x; "
>> +			 "Defaulting to MSI-x\n", int_mode);
>> +		return QED_INT_MODE_MSIX;
>> +	}
>> +}
>Errm. A per-driver interrupt mode?
>How very curious.
>You surely want to make that per-HBA, right?

This was added for testing purposes; we will remove this code.
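
For reference, if the mode selection were kept, making it per-HBA would
roughly mean carrying the selected mode in the per-adapter qedi_ctx
instead of reading the global module parameter -- an untested sketch
(the int_mode member and the qedi_set_int_mode() helper below are only
illustrative, they are not part of this patch):

	/* qedi.h: carry the interrupt mode per adapter instead of in a
	 * global module parameter (illustrative member only).
	 */
	struct qedi_ctx {
		...
		enum qed_int_mode int_mode;
	};

	/* Resolve the mode per HBA, e.g. from a per-device configuration
	 * hint passed in at probe time, defaulting to MSI-X.
	 */
	static void qedi_set_int_mode(struct qedi_ctx *qedi, u32 mode)
	{
		switch (mode) {
		case 1:
			qedi->int_mode = QED_INT_MODE_INTA;
			break;
		case 2:
			qedi->int_mode = QED_INT_MODE_MSI;
			break;
		default:
			qedi->int_mode = QED_INT_MODE_MSIX;
			break;
		}
	}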
 

[...]
>> +static int qedi_request_msix_irq(struct qedi_ctx *qedi)
>> +{
>> +	int i, rc, cpu;
>> +
>> +	cpu = cpumask_first(cpu_online_mask);
>> +	for (i = 0; i < MIN_NUM_CPUS_MSIX(qedi); i++) {
>> +		rc = request_irq(qedi->int_info.msix[i].vector,
>> +				 qedi_msix_handler, 0, "qedi",
>> +				 &qedi->fp_array[i]);
>> +
>> +		if (rc) {
>> +			QEDI_WARN(&qedi->dbg_ctx, "request_irq failed.\n");
>> +			qedi_sync_free_irqs(qedi);
>> +			return rc;
>> +		}
>> +		qedi->int_info.used_cnt++;
>> +		rc = irq_set_affinity_hint(qedi->int_info.msix[i].vector,
>> +					   get_cpu_mask(cpu));
>> +		cpu = cpumask_next(cpu, cpu_online_mask);
>> +	}
>> +
>> +	return 0;
>> +}
>Please use the irq-affinity rework from Christoph here; that'll save you
>the additional msix vectors allocation.

The existing qed* drivers and the common qed module are built on top of
the older pci_enable_msix_*() API. Moving to the new framework requires
reworking the existing qed common-module API, which would need
coordination with the other dependent drivers (e.g. the qede network
driver, which is already in the tree). We would prefer to take this up
as a follow-on effort to the initial submission, with additional
testing done and the submission coordinated across the protocol
drivers.
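
For reference, once the qed common module is moved over, the manual
per-vector affinity handling above should mostly go away; roughly along
these lines (untested sketch -- today the vectors are allocated inside
the qed common module, and the qedi->pdev reference here is assumed for
illustration):

	static int qedi_request_msix_irq(struct qedi_ctx *qedi)
	{
		int i, rc, nvec;

		/* Let the PCI core allocate the MSI-X vectors and spread
		 * them across the online CPUs; no irq_set_affinity_hint()
		 * bookkeeping needed in the driver.
		 */
		nvec = pci_alloc_irq_vectors(qedi->pdev, 1,
					     MIN_NUM_CPUS_MSIX(qedi),
					     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
		if (nvec < 0)
			return nvec;

		for (i = 0; i < nvec; i++) {
			rc = request_irq(pci_irq_vector(qedi->pdev, i),
					 qedi_msix_handler, 0, "qedi",
					 &qedi->fp_array[i]);
			if (rc) {
				QEDI_WARN(&qedi->dbg_ctx,
					  "request_irq failed.\n");
				qedi_sync_free_irqs(qedi);
				pci_free_irq_vectors(qedi->pdev);
				return rc;
			}
			qedi->int_info.used_cnt++;
		}

		return 0;
	}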



Thanks,
Manish
