Message-ID: <CACVXFVOGdvMDSZTUNH3DrXErm1E4LKBjzCFpL3r815JFJbvM4A@mail.gmail.com>
Date: Thu, 22 Aug 2019 18:55:32 +0800
From: Ming Lei <tom.leiming@...il.com>
To: longli@...uxonhyperv.com
Cc: "K. Y. Srinivasan" <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Sasha Levin <sashal@...nel.org>,
"James E.J. Bottomley" <jejb@...ux.ibm.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
linux-hyperv@...r.kernel.org,
Linux SCSI List <linux-scsi@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Long Li <longli@...rosoft.com>
Subject: Re: [PATCH] storvsc: setup 1:1 mapping between hardware queue and CPU queue

On Tue, Aug 20, 2019 at 3:36 AM <longli@...uxonhyperv.com> wrote:
>
> From: Long Li <longli@...rosoft.com>
>
> storvsc doesn't use a dedicated hardware queue for a given CPU queue. When
> issuing I/O, it dynamically selects the returning CPU (hardware queue) based
> on vmbus channel usage across all channels.
>
> This patch sets up a 1:1 mapping between hardware queues and CPU queues, thus
> avoiding unnecessary locking at the upper layer when issuing I/O.
>
> Signed-off-by: Long Li <longli@...rosoft.com>
> ---
> drivers/scsi/storvsc_drv.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
> index b89269120a2d..26c16d40ec46 100644
> --- a/drivers/scsi/storvsc_drv.c
> +++ b/drivers/scsi/storvsc_drv.c
> @@ -1682,6 +1682,18 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
> return 0;
> }
>
> +static int storvsc_map_queues(struct Scsi_Host *shost)
> +{
> + unsigned int cpu;
> + struct blk_mq_queue_map *qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT];
> +
> + for_each_possible_cpu(cpu) {
> + qmap->mq_map[cpu] = cpu;
> + }

The block layer provides the helper blk_mq_map_queues(), so I suggest you
use the default CPU mapping instead of inventing a new one.
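
For illustration, here is a minimal sketch of what that could look like
(untested, assuming the tag_set layout in the code quoted above):

	static int storvsc_map_queues(struct Scsi_Host *shost)
	{
		/* Let the block layer build its default CPU-to-hw-queue mapping. */
		return blk_mq_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT]);
	}
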
thanks,
Ming Lei