Message-ID: <48ca2186.fc76.169b8fef63a.Coremail.luferry@163.com>
Date: Tue, 26 Mar 2019 15:55:10 +0800 (CST)
From: luferry <luferry@....com>
To: "Christoph Hellwig" <hch@....de>
Cc: "Jens Axboe" <axboe@...nel.dk>,
"Dongli Zhang" <dongli.zhang@...cle.com>,
"Ming Lei" <ming.lei@...hat.com>, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re:Re: [PATCH] make blk_mq_map_queues more friendly for cpu
topology
At 2019-03-26 15:39:54, "Christoph Hellwig" <hch@....de> wrote:
>Why isn't this using the automatic PCI-level affinity assignment to
>start with?
When virtio-blk is enabled with multiple queues but only 2 MSI-X vectors,
vp_dev->per_vq_vectors will be false and vp_get_vq_affinity() will return NULL directly,
so blk_mq_virtio_map_queues() will fall back to blk_mq_map_queues().
const struct cpumask *vp_get_vq_affinity(struct virtio_device *vdev, int index)
{
	struct virtio_pci_device *vp_dev = to_vp_device(vdev);

	/* shared vectors: per_vq_vectors is false, so no per-queue affinity */
	if (!vp_dev->per_vq_vectors ||
	    vp_dev->vqs[index]->msix_vector == VIRTIO_MSI_NO_VECTOR)
		return NULL;

	return pci_irq_get_affinity(vp_dev->pci_dev,
				    vp_dev->vqs[index]->msix_vector);
}
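
For context, per_vq_vectors ends up false here because of the vector-allocation
fallback in drivers/virtio/virtio_pci_common.c: vp_find_vqs() first tries one
MSI-X vector per virtqueue, and when only 2 vectors are available that attempt
fails, so it falls back to the shared-vector mode (one vector for config plus
one shared by all virtqueues). A rough sketch of that fallback order follows;
the argument lists of the real helpers are trimmed here for illustration, only
the per_vq_vectors flag is kept:

/*
 * Simplified sketch (not the exact kernel code) of the fallback order
 * in vp_find_vqs(); argument lists are trimmed for illustration.
 */
static int vp_find_vqs_sketch(struct virtio_device *vdev, unsigned int nvqs)
{
	int err;

	/* 1) one MSI-X vector per virtqueue -> per_vq_vectors = true */
	err = vp_find_vqs_msix(vdev, nvqs, true /* per_vq_vectors */);
	if (!err)
		return 0;
	/* 2) with only 2 vectors this is the mode that succeeds: one vector
	 *    for config plus one shared by all virtqueues
	 *    -> per_vq_vectors = false, so vp_get_vq_affinity() above
	 *    returns NULL for every queue */
	err = vp_find_vqs_msix(vdev, nvqs, false /* per_vq_vectors */);
	if (!err)
		return 0;
	/* 3) last resort: legacy INTx interrupt */
	return vp_find_vqs_intx(vdev, nvqs);
}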
int blk_mq_virtio_map_queues(struct blk_mq_queue_map *qmap,
		struct virtio_device *vdev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	if (!vdev->config->get_vq_affinity)
		goto fallback;

	for (queue = 0; queue < qmap->nr_queues; queue++) {
		mask = vdev->config->get_vq_affinity(vdev, first_vec + queue); // vp_get_vq_affinity returns NULL
		if (!mask)
			goto fallback;

		for_each_cpu(cpu, mask)
			qmap->mq_map[cpu] = qmap->queue_offset + queue;
	}

	return 0;
fallback:
	return blk_mq_map_queues(qmap);
}
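
Once blk_mq_map_queues() is used, the CPU-to-queue mapping is built without
looking at which CPUs share a core or cache, which is what the patch tries to
improve. Below is a small self-contained userspace illustration (not the
kernel code; the 4-cores-x-2-SMT-siblings topology is an assumed example) of
the difference between a plain round-robin mapping and one that keeps both
siblings of a core on the same queue:

/*
 * Illustrative userspace sketch only -- not the kernel's blk_mq_map_queues().
 * Assumed topology: 8 CPUs, CPUs 2k and 2k+1 are SMT siblings of core k.
 */
#include <stdio.h>

#define NR_CPUS   8
#define NR_QUEUES 4

static int core_of(int cpu) { return cpu / 2; }

int main(void)
{
	int naive[NR_CPUS], aware[NR_CPUS], cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		/* topology-unaware round-robin: siblings of one core can
		 * land on different queues */
		naive[cpu] = cpu % NR_QUEUES;
		/* topology-aware: map by core so siblings share a queue */
		aware[cpu] = core_of(cpu) % NR_QUEUES;
	}

	printf("cpu core naive-queue aware-queue\n");
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("%3d %4d %11d %11d\n",
		       cpu, core_of(cpu), naive[cpu], aware[cpu]);
	return 0;
}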
Here is the previous discussion:
https://patchwork.kernel.org/patch/10865461/