Message-ID: <94D0CD8314A33A4D9D801C0FE68B402959408BB5@G4W3202.americas.hpqcorp.net>
Date: Thu, 11 Dec 2014 01:53:02 +0000
From: "Elliott, Robert (Server Storage)" <Elliott@...com>
To: Sreekanth Reddy <sreekanth.reddy@...gotech.com>,
"martin.petersen@...cle.com" <martin.petersen@...cle.com>,
"jejb@...nel.org" <jejb@...nel.org>,
"hch@...radead.org" <hch@...radead.org>
CC: "linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"JBottomley@...allels.com" <JBottomley@...allels.com>,
"Sathya.Prakash@...gotech.com" <Sathya.Prakash@...gotech.com>,
"Nagalakshmi.Nandigama@...gotech.com"
<Nagalakshmi.Nandigama@...gotech.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH 09/22] [SCSI] mpt2sas, mpt3sas: Added a support to set
cpu affinity for each MSIX vector enabled by the HBA
> -----Original Message-----
> From: linux-scsi-owner@...r.kernel.org [mailto:linux-scsi-
> owner@...r.kernel.org] On Behalf Of Sreekanth Reddy
> Sent: Tuesday, 09 December, 2014 6:17 AM
> To: martin.petersen@...cle.com; jejb@...nel.org; hch@...radead.org
...
> Change_set:
> 1. Added an affinity_hint variable of type cpumask_var_t to the
> adapter_reply_queue structure, and allocated memory for it by calling
> zalloc_cpumask_var.
> 2. Called the irq_set_affinity_hint API for each MSI-X vector to tie it
> to the CPUs calculated at driver initialization time.
> 3. While freeing the MSI-X vectors, called the same API to release each
> vector's CPU affinity mask by passing NULL as the cpumask argument.
> 4. Then called free_cpumask_var to free the memory allocated in step 1.
>
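For reference, the allocate/hint/release pattern these four steps describe
looks roughly like this (a minimal sketch reusing the patch's reply_q
naming; the cpu variable and the surrounding error unwinding are
illustrative, not taken from the patch):

	/* step 1: allocate a zeroed cpumask for this reply queue */
	if (!zalloc_cpumask_var(&reply_q->affinity_hint, GFP_KERNEL))
		return -ENOMEM;

	/* step 2: record the CPUs chosen for this vector and publish
	 * the hint (visible in /proc/irq/<N>/affinity_hint)
	 */
	cpumask_set_cpu(cpu, reply_q->affinity_hint);
	irq_set_affinity_hint(reply_q->vector, reply_q->affinity_hint);

	/* step 3, at teardown: drop the hint before freeing the vector */
	irq_set_affinity_hint(reply_q->vector, NULL);

	/* step 4: free the memory allocated in step 1 */
	free_cpumask_var(reply_q->affinity_hint);
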
...
> diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
> index 1560115..f0f8ba0 100644
> --- a/drivers/scsi/mpt3sas/mpt3sas_base.c
> +++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
...
> @@ -1609,6 +1611,10 @@ _base_request_irq(struct MPT3SAS_ADAPTER *ioc, u8 index, u32 vector)
> reply_q->ioc = ioc;
> reply_q->msix_index = index;
> reply_q->vector = vector;
> +
> + if (!zalloc_cpumask_var(&reply_q->affinity_hint, GFP_KERNEL))
> + return -ENOMEM;

I think this will hit the same problem Alex Thorlton just reported
against lpfc on a system with a huge number (6144) of CPUs.  See this
thread:

  [BUG] kzalloc overflow in lpfc driver on 6k core system
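For scale (my arithmetic, not figures from Alex's report): with
CONFIG_CPUMASK_OFFSTACK=y, each zalloc_cpumask_var() call above becomes
a separate kmalloc, one per MSI-X vector:

	/*
	 * Per-vector cost of the cpumask allocation, assuming
	 * CONFIG_CPUMASK_OFFSTACK=y (illustrative numbers):
	 *
	 *   BITS_TO_LONGS(nr_cpu_ids) * sizeof(long)
	 *   = 6144 bits / 8 = 768 bytes per vector
	 *
	 * Without CONFIG_CPUMASK_OFFSTACK, cpumask_var_t is instead a
	 * full NR_CPUS-bit array embedded directly in the structure,
	 * so adapter_reply_queue itself grows with NR_CPUS.
	 */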
---
Rob Elliott HP Server Storage