Date:   Fri, 23 Sep 2016 15:21:14 -0700
From:   Sagi Grimberg <sagi@...mberg.me>
To:     Christoph Hellwig <hch@....de>, axboe@...com, tglx@...utronix.de
Cc:     agordeev@...hat.com, keith.busch@...el.com,
        linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 11/13] nvme: switch to use pci_alloc_irq_vectors



On 14/09/16 07:18, Christoph Hellwig wrote:
> Use the new helper to automatically select the right interrupt type, as
> well as to use the automatic interrupt affinity assignment.

The patch title and change description are a little short IMO for what
is actually going on here (the blk-mq side needs to be mentioned too).

I also think it would be better to split this into two patches, but
it's really not a must...
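
(For context, as I understand the series, the allocation side in the
nvme pci driver ends up roughly like the untested sketch below, where
PCI_IRQ_AFFINITY is what triggers the automatic spreading; the exact
bounds and flags here are my guess at the final form:)

--
	/*
	 * Sketch only: allocate between 1 and nr_io_queues vectors of
	 * whatever type the device supports, and let the core spread
	 * them across CPUs (PCI_IRQ_AFFINITY).
	 */
	ret = pci_alloc_irq_vectors(pdev, 1, nr_io_queues,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
	if (ret < 0)
		return ret;
	nr_io_queues = ret;
--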

> +static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
> +{
> +	struct nvme_dev *dev = set->driver_data;
> +
> +	return blk_mq_pci_map_queues(set, to_pci_dev(dev->dev));
> +}
> +

Question: is using pci_alloc_irq_vectors() required in order to
supply blk-mq with the device affinity mask?
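
(To make the question concrete: from memory, blk_mq_pci_map_queues()
in block/blk-mq-pci.c is roughly the sketch below, so it can only do
its job if pci_irq_get_affinity() has something to return, i.e. if the
vectors were allocated with PCI_IRQ_AFFINITY:)

--
int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < set->nr_hw_queues; queue++) {
		/* affinity mask assigned at vector allocation time */
		mask = pci_irq_get_affinity(pdev, queue);
		if (!mask)
			return -EINVAL;

		for_each_cpu(cpu, mask)
			set->mq_map[cpu] = queue;
	}

	return 0;
}
--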

If I apply this completely untested change [1], what will happen?

[1]:
--
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 8d2875b4c56d..76693d406efe 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1518,6 +1518,14 @@ static void nvme_rdma_complete_rq(struct request *rq)
         blk_mq_end_request(rq, error);
  }

+static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
+{
+       struct nvme_rdma_ctrl *ctrl = set->driver_data;
+       struct device *dev = ctrl->device->dev.dma_device;
+
+       return blk_mq_pci_map_queues(set, to_pci_dev(dev));
+}
+
  static struct blk_mq_ops nvme_rdma_mq_ops = {
         .queue_rq       = nvme_rdma_queue_rq,
         .complete       = nvme_rdma_complete_rq,
@@ -1528,6 +1536,7 @@ static struct blk_mq_ops nvme_rdma_mq_ops = {
         .init_hctx      = nvme_rdma_init_hctx,
         .poll           = nvme_rdma_poll,
         .timeout        = nvme_rdma_timeout,
+       .map_queues     = nvme_rdma_map_queues,
  };

  static struct blk_mq_ops nvme_rdma_admin_mq_ops = {
--
