Message-ID: <cd5df8e0-03d1-8f22-0367-eb7c76bc70e7@opensource.wdc.com>
Date: Thu, 27 Oct 2022 10:18:20 +0900
From: Damien Le Moal <damien.lemoal@...nsource.wdc.com>
To: John Garry <john.garry@...wei.com>, axboe@...nel.dk,
jejb@...ux.ibm.com, martin.petersen@...cle.com,
jinpu.wang@...ud.ionos.com, hare@...e.de, bvanassche@....org,
hch@....de, ming.lei@...hat.com, niklas.cassel@....com
Cc: linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-ide@...r.kernel.org, linux-scsi@...r.kernel.org,
linuxarm@...wei.com
Subject: Re: [PATCH RFC v3 03/22] scsi: core: Implement reserved command
handling
On 10/25/22 19:17, John Garry wrote:
> From: Hannes Reinecke <hare@...e.de>
>
> Quite a few drivers use management commands internally, which
> typically use the same hardware tag pool (i.e. they are allocated
> from the same hardware resources) as the 'normal' I/O commands.
> These commands are set aside before allocating the block-mq tag bitmap,
> so they'll never show up as busy in the tag map.
> The block-layer, OTOH, already has 'reserved_tags' to handle precisely
> this situation.
> So this patch adds a new field 'nr_reserved_cmds' to the SCSI host
> template to instruct the block layer to set aside a tag space for these
> management commands by using reserved tags.
>
> Signed-off-by: Hannes Reinecke <hare@...e.de>
> #jpg: Set tag_set->queue_depth = shost->can_queue, and not
> = shost->can_queue + shost->nr_reserved_cmds;
> Signed-off-by: John Garry <john.garry@...wei.com>
> ---
> drivers/scsi/hosts.c | 3 +++
> drivers/scsi/scsi_lib.c | 2 ++
> include/scsi/scsi_host.h | 15 ++++++++++++++-
> 3 files changed, 19 insertions(+), 1 deletion(-)
>
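So if I understand this correctly, a LLDD needing a couple of internal
commands would size can_queue to the full hardware tag space and declare
the reserved portion on top of it, something like this (foo_sht and the
numbers are made up for illustration, not taken from this series):

	static struct scsi_host_template foo_sht = {
		.name			= "foo",
		.can_queue		= 1024,	/* total HW tags, reserved included */
		.nr_reserved_cmds	= 4,	/* tags set aside for internal commands */
		.this_id		= -1,
		.sg_tablesize		= SG_ALL,
		.cmd_per_lun		= 1,
	};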
> diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
> index 12346e2297fd..db89afc37bc9 100644
> --- a/drivers/scsi/hosts.c
> +++ b/drivers/scsi/hosts.c
> @@ -489,6 +489,9 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
> if (sht->virt_boundary_mask)
> shost->virt_boundary_mask = sht->virt_boundary_mask;
>
> + if (sht->nr_reserved_cmds)
> + shost->nr_reserved_cmds = sht->nr_reserved_cmds;
> +
Nit: the if is not really necessary, I think. But it does not hurt.
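That is, since the host is allocated with kzalloc(), a plain

	shost->nr_reserved_cmds = sht->nr_reserved_cmds;

would be enough.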
> device_initialize(&shost->shost_gendev);
> dev_set_name(&shost->shost_gendev, "host%d", shost->host_no);
> shost->shost_gendev.bus = &scsi_bus_type;
> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> index 39d4fd124375..a8c4e7c037ae 100644
> --- a/drivers/scsi/scsi_lib.c
> +++ b/drivers/scsi/scsi_lib.c
> @@ -1978,6 +1978,8 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
> tag_set->nr_hw_queues = shost->nr_hw_queues ? : 1;
> tag_set->nr_maps = shost->nr_maps ? : 1;
> tag_set->queue_depth = shost->can_queue;
> + tag_set->reserved_tags = shost->nr_reserved_cmds;
> +
Why the blank line?
> tag_set->cmd_size = cmd_size;
> tag_set->numa_node = dev_to_node(shost->dma_dev);
> tag_set->flags = BLK_MQ_F_SHOULD_MERGE;
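Side note, just to check my understanding of how this will be consumed:
with reserved_tags set here, an internal command would be backed by a tag
from the blk-mq reserved pool, i.e. something along the lines of (sketch
only, the SCSI-level helper for this is not part of this patch):

	struct request *rq;

	/* Allocate a request backed by a reserved tag */
	rq = blk_mq_alloc_request(sdev->request_queue, REQ_OP_DRV_IN,
				  BLK_MQ_REQ_RESERVED);
	if (IS_ERR(rq))
		return PTR_ERR(rq);

so regular I/O can never starve the internal commands.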
> diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
> index 750ccf126377..91678c77398e 100644
> --- a/include/scsi/scsi_host.h
> +++ b/include/scsi/scsi_host.h
> @@ -360,10 +360,17 @@ struct scsi_host_template {
> /*
> * This determines if we will use a non-interrupt driven
> * or an interrupt driven scheme. It is set to the maximum number
> - * of simultaneous commands a single hw queue in HBA will accept.
> + * of simultaneous commands a single hw queue in HBA will accept
> + * including reserved commands.
> */
> int can_queue;
>
> + /*
> + * This determines how many commands the HBA will set aside
> + * for reserved commands.
> + */
> + int nr_reserved_cmds;
> +
> /*
> * In many instances, especially where disconnect / reconnect are
> * supported, our host also has an ID on the SCSI bus. If this is
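Just to spell out the accounting with this new can_queue definition: a host
setting e.g. can_queue = 1024 and nr_reserved_cmds = 4 gives blk-mq a
queue_depth of 1024, of which 4 tags sit in the reserved bitmap, leaving
1024 - 4 = 1020 tags for regular I/O.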
> @@ -611,6 +618,12 @@ struct Scsi_Host {
> */
> unsigned nr_hw_queues;
> unsigned nr_maps;
> +
> + /*
> + * Number of reserved commands to allocate, if any.
> + */
> + unsigned int nr_reserved_cmds;
> +
> unsigned active_mode:2;
>
> /*
--
Damien Le Moal
Western Digital Research