Message-ID: <DM5PR2101MB1030445CF939E0C6A3BE3AC7DCE20@DM5PR2101MB1030.namprd21.prod.outlook.com>
Date: Wed, 24 Jan 2018 22:37:11 +0000
From: "Michael Kelley (EOSG)" <Michael.H.Kelley@...rosoft.com>
To: KY Srinivasan <kys@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
"martin.petersen@...cle.com" <martin.petersen@...cle.com>,
Long Li <longli@...rosoft.com>,
"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"James E . J . Bottomley" <jejb@...ux.vnet.ibm.com>
Subject: RE: [PATCH 1/1] scsi: storvsc: Spread interrupts when picking a
channel for I/O requests
Updated/corrected two email addresses ...
> -----Original Message-----
> From: Michael Kelley (EOSG)
> Sent: Wednesday, January 24, 2018 2:14 PM
> To: KY Srinivasan <kys@...rosoft.com>; Stephen Hemminger <sthemmin@...rosoft.com>;
> martin.petersen@...cle.com; longi@...rosoft.com; JBottomley@...n.com;
> devel@...uxdriverproject.org; linux-kernel@...r.kernel.org; linux-scsi@...r.kernel.org
> Cc: Michael Kelley (EOSG) <Michael.H.Kelley@...rosoft.com>
> Subject: [PATCH 1/1] scsi: storvsc: Spread interrupts when picking a channel for I/O requests
>
> Update the algorithm in storvsc_do_io to look for a channel
> starting with the current CPU + 1 and wrapping around (within the
> current NUMA node). This spreads VMbus interrupts more evenly
> across CPUs. The previous code always started with the first CPU
> in the current NUMA node, skewing the interrupt load toward that
> CPU. (See the illustrative sketch after the patch.)
>
> Signed-off-by: Michael Kelley <mikelley@...rosoft.com>
> ---
> drivers/scsi/storvsc_drv.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
> index e07907d..f3264c4 100644
> --- a/drivers/scsi/storvsc_drv.c
> +++ b/drivers/scsi/storvsc_drv.c
> @@ -1310,7 +1310,8 @@ static int storvsc_do_io(struct hv_device *device,
> */
> cpumask_and(&alloced_mask, &stor_device->alloced_cpus,
> cpumask_of_node(cpu_to_node(q_num)));
> - for_each_cpu(tgt_cpu, &alloced_mask) {
> + for_each_cpu_wrap(tgt_cpu, &alloced_mask,
> + outgoing_channel->target_cpu + 1) {
> if (tgt_cpu != outgoing_channel->target_cpu) {
> outgoing_channel =
> stor_device->stor_chns[tgt_cpu];
> --
> 1.8.3.1
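
For reference, here is a minimal user-space sketch of the wrap-around selection idea. It is not the storvsc code and does not use the kernel's for_each_cpu_wrap(); the NODE_CPUS count and the pick_target_cpu() helper are made up for illustration. It only shows why scanning from the current CPU + 1 with wrap-around spreads the chosen target CPUs, whereas always scanning from the first CPU in the node concentrates the picks on that CPU.

/*
 * Illustrative sketch only (not the storvsc driver code).
 * Models a wrap-around scan over the CPUs of one NUMA node:
 * start at current CPU + 1, wrap, and skip the current CPU.
 */
#include <stdio.h>

#define NODE_CPUS 4	/* hypothetical number of CPUs in the node */

/* Pick a target CPU, scanning from start_cpu + 1 and wrapping around. */
static int pick_target_cpu(int start_cpu)
{
	int i, tgt;

	for (i = 1; i <= NODE_CPUS; i++) {
		tgt = (start_cpu + i) % NODE_CPUS;
		if (tgt != start_cpu)
			return tgt;
	}
	return start_cpu;	/* only one CPU available */
}

int main(void)
{
	int cpu;

	/* Each current CPU hands its I/O interrupt to a different neighbor. */
	for (cpu = 0; cpu < NODE_CPUS; cpu++)
		printf("current CPU %d -> target CPU %d\n",
		       cpu, pick_target_cpu(cpu));
	return 0;
}

Compiled and run, this prints a different target for each current CPU (0 -> 1, 1 -> 2, 2 -> 3, 3 -> 0), mirroring the even spread the patch aims for; a scan that always started at CPU 0 would steer most picks to the same CPU.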