Message-ID: <ZzVZQbZOYhNF08LX@fedora>
Date: Thu, 14 Nov 2024 09:58:25 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Daniel Wagner <wagi@...nel.org>
Cc: Jens Axboe <axboe@...nel.dk>, Bjorn Helgaas <bhelgaas@...gle.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Eugenio Pérez <eperezma@...hat.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
Keith Busch <kbusch@...nel.org>, Christoph Hellwig <hch@....de>,
Sagi Grimberg <sagi@...mberg.me>,
John Garry <john.g.garry@...cle.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Hannes Reinecke <hare@...e.de>, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-pci@...r.kernel.org,
virtualization@...ts.linux.dev, linux-scsi@...r.kernel.org,
megaraidlinux.pdl@...adcom.com, mpi3mr-linuxdrv.pdl@...adcom.com,
MPT-FusionLinux.pdl@...adcom.com, storagedev@...rochip.com,
linux-nvme@...ts.infradead.org
Subject: Re: [PATCH v4 05/10] blk-mq: introduce blk_mq_hctx_map_queues
On Wed, Nov 13, 2024 at 03:26:19PM +0100, Daniel Wagner wrote:
> blk_mq_pci_map_queues and blk_mq_virtio_map_queues will create a CPU to
> hardware queue mapping based on affinity information. These two functions
> share common code and only differ in how the affinity information is
> retrieved. Also, these functions are located in the block subsystem,
> where they don't really fit in; they are virtio and PCI subsystem
> specific.
>
> Thus introduce a generic mapping function which uses the
> irq_get_affinity callback from bus_type.
>
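[ For context, a minimal sketch of how a bus could implement the new
  callback, using PCI and the existing pci_irq_get_affinity() helper as
  the example; the function name is illustrative and not necessarily
  what the series uses: ]

#include <linux/device/bus.h>
#include <linux/pci.h>

/* Forward the generic bus_type callback to the PCI-specific helper. */
static const struct cpumask *pci_device_irq_get_affinity(struct device *dev,
                                                         unsigned int irq_vec)
{
        return pci_irq_get_affinity(to_pci_dev(dev), irq_vec);
}

/* ...then wired up as .irq_get_affinity in pci_bus_type. */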
> Original idea from Ming Lei <ming.lei@...hat.com>
>
> Signed-off-by: Daniel Wagner <wagi@...nel.org>
> ---
> block/blk-mq-cpumap.c | 43 +++++++++++++++++++++++++++++++++++++++++++
> include/linux/blk-mq.h | 2 ++
> 2 files changed, 45 insertions(+)
>
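[ Caller-side, the conversion is then mechanical; a sketch for a PCI
  driver, where set is the driver's blk_mq_tag_set and pdev its bound
  struct pci_dev (illustrative only): ]

        /* Old, PCI-specific helper: */
        blk_mq_pci_map_queues(&set->map[HCTX_TYPE_DEFAULT], pdev, 0);

        /* New, generic helper; works for any bus (or driver) that
         * implements the irq_get_affinity callback: */
        blk_mq_hctx_map_queues(&set->map[HCTX_TYPE_DEFAULT], &pdev->dev, 0);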
> diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
> index 9638b25fd52124f0173e968ebdca5f1fe0b42ad9..3506f1c25a02d331d28212a2a97fb269cb21e738 100644
> --- a/block/blk-mq-cpumap.c
> +++ b/block/blk-mq-cpumap.c
> @@ -11,6 +11,7 @@
> #include <linux/smp.h>
> #include <linux/cpu.h>
> #include <linux/group_cpus.h>
> +#include <linux/device/bus.h>
>
> #include "blk.h"
> #include "blk-mq.h"
> @@ -54,3 +55,45 @@ int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index)
>
> return NUMA_NO_NODE;
> }
> +
> +/**
> + * blk_mq_hctx_map_queues - Create CPU to hardware queue mapping
> + * @qmap: CPU to hardware queue map.
> + * @dev: The device to map queues.
> + * @offset: Queue offset to use for the device.
> + *
> + * Create a CPU to hardware queue mapping in @qmap. The struct bus_type
> + * irq_get_affinity callback will be used to retrieve the affinity.
> + */
> +void blk_mq_hctx_map_queues(struct blk_mq_queue_map *qmap,
Some drivers may not know about hctx at all; maybe name it blk_mq_map_hw_queues()?
> + struct device *dev, unsigned int offset)
> +
> +{
> + const struct cpumask *(*irq_get_affinity)(struct device *dev,
> + unsigned int irq_vec);
> + const struct cpumask *mask;
> + unsigned int queue, cpu;
> +
> + if (dev->driver->irq_get_affinity)
> + irq_get_affinity = dev->driver->irq_get_affinity;
> + else if (dev->bus->irq_get_affinity)
> + irq_get_affinity = dev->bus->irq_get_affinity;
This is a generic API, so I think both 'dev->driver' and
'dev->bus' should be validated here before being dereferenced.
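[ i.e. something along these lines; a sketch only, with the actual
  mapping loop and a hypothetical fallback label elided: ]

        if (dev->driver && dev->driver->irq_get_affinity)
                irq_get_affinity = dev->driver->irq_get_affinity;
        else if (dev->bus && dev->bus->irq_get_affinity)
                irq_get_affinity = dev->bus->irq_get_affinity;
        else
                goto fallback;  /* hypothetical: default CPU mapping */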
Thanks,
Ming