Date: Tue, 4 Apr 2017 10:51:47 +0300
From: Max Gurtovoy <maxg@...lanox.com>
To: Sagi Grimberg <sagi@...mberg.me>, <linux-rdma@...r.kernel.org>,
<linux-nvme@...ts.infradead.org>, <linux-block@...r.kernel.org>
CC: <netdev@...r.kernel.org>, Saeed Mahameed <saeedm@...lanox.com>,
Or Gerlitz <ogerlitz@...lanox.com>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma
>
> Any feedback is welcome.
Hi Sagi,
The patchset looks good, and of course we can add support for more
drivers in the future.
Have you run any performance testing with the nvmf initiator?
>
> Sagi Grimberg (6):
> mlx5: convert to generic pci_alloc_irq_vectors
> mlx5: move affinity hints assignments to generic code
> RDMA/core: expose affinity mappings per completion vector
> mlx5: support ->get_vector_affinity
> block: Add rdma affinity based queue mapping helper
> nvme-rdma: use intelligent affinity based queue mappings
>
> block/Kconfig | 5 +
> block/Makefile | 1 +
> block/blk-mq-rdma.c | 56 +++++++++++
> drivers/infiniband/hw/mlx5/main.c | 10 ++
> drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 5 +-
> drivers/net/ethernet/mellanox/mlx5/core/eq.c | 9 +-
> drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 2 +-
> drivers/net/ethernet/mellanox/mlx5/core/health.c | 2 +-
> drivers/net/ethernet/mellanox/mlx5/core/main.c | 106 +++------------------
> .../net/ethernet/mellanox/mlx5/core/mlx5_core.h | 1 -
> drivers/nvme/host/rdma.c | 13 +++
> include/linux/blk-mq-rdma.h | 10 ++
> include/linux/mlx5/driver.h | 2 -
> include/rdma/ib_verbs.h | 24 +++++
> 14 files changed, 138 insertions(+), 108 deletions(-)
> create mode 100644 block/blk-mq-rdma.c
> create mode 100644 include/linux/blk-mq-rdma.h
>
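For readers of the archive, here is a rough sketch (not taken from the
posted patches) of how an RDMA block driver could wire up the proposed
blk-mq-rdma queue mapping helper. The names my_rdma_ctrl,
my_rdma_map_queues and the ibdev field are hypothetical, and the
blk_mq_rdma_map_queues(set, ibdev, first_vec) signature is assumed from
the helper as it was later merged; the RFC patches themselves are the
authority on the exact interface.

/*
 * Hypothetical consumer of the proposed helper: map each blk-mq hw queue
 * to the CPUs that the RDMA device reports for the matching completion
 * vector (via ib_get_vector_affinity()), so submission and completion
 * processing stay on the same CPU/node.
 */
#include <linux/blk-mq.h>
#include <linux/blk-mq-rdma.h>
#include <rdma/ib_verbs.h>

struct my_rdma_ctrl {			/* hypothetical driver context */
	struct ib_device *ibdev;	/* device used for the I/O queues */
};

static int my_rdma_map_queues(struct blk_mq_tag_set *set)
{
	struct my_rdma_ctrl *ctrl = set->driver_data;

	/*
	 * first_vec = 0: hw queue i follows completion vector i.  The
	 * helper is expected to fall back to the default blk-mq mapping
	 * when the device does not implement ->get_vector_affinity.
	 */
	return blk_mq_rdma_map_queues(set, ctrl->ibdev, 0);
}

static const struct blk_mq_ops my_rdma_mq_ops = {
	/* ... queue_rq, init_request, etc. ... */
	.map_queues	= my_rdma_map_queues,
};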