Message-Id: <20190411181314.19465-1-jglisse@redhat.com>
Date: Thu, 11 Apr 2019 14:13:13 -0400
From: jglisse@...hat.com
To: linux-kernel@...r.kernel.org
Cc: Jérôme Glisse <jglisse@...hat.com>,
linux-rdma@...r.kernel.org, Jason Gunthorpe <jgg@...lanox.com>,
Leon Romanovsky <leonro@...lanox.com>,
Doug Ledford <dledford@...hat.com>,
Artemy Kovalyov <artemyko@...lanox.com>,
Moni Shoua <monis@...lanox.com>,
Mike Marciniszyn <mike.marciniszyn@...el.com>,
Kaike Wan <kaike.wan@...el.com>,
Dennis Dalessandro <dennis.dalessandro@...el.com>
Subject: [PATCH v4 0/1] Use HMM for ODP v4
From: Jérôme Glisse <jglisse@...hat.com>
Just fixed Kconfig and the build when ODP was not enabled; other than that
this is the same as v3. Here is the previous cover letter:
Git tree with all prerequisites:
https://cgit.freedesktop.org/~glisse/linux/log/?h=rdma-odp-hmm-v4
This patchset converts RDMA ODP to use HMM underneath. This is motivated
by stronger code sharing for the same feature (Shared Virtual Memory (SVM),
also called Shared Virtual Address (SVA)) and by stronger integration with
mm code to achieve that. It depends on the HMM patchset posted for
inclusion in 5.2 [2] and on the mmu notifier patchset [3].
It has been tested with the pingpong test using -o and other flags to
exercise the different sizes/features associated with ODP.
Moreover, there are some features of HMM in the works, like peer to peer
support, fast CPU page table snapshot, fast IOMMU mapping update, ...
It will be easier for RDMA devices with ODP to leverage those if they
use HMM underneath.
Quick summary of what HMM is:
HMM is a toolbox for device drivers to implement software support for
Shared Virtual Memory (SVM). Not only does it provide helpers to mirror a
process address space on a device (hmm_mirror), it also provides helpers
to allow using device memory to back regular valid virtual addresses of a
process (any valid mmap that is not an mmap of a device file or a DAX
mapping). There are two kinds of device memory: private memory, which is
not accessible to the CPU because it does not have all the expected
properties (this is the case for all PCIe devices), and public memory,
which can also be accessed by the CPU without restriction (with OpenCAPI
or CCIX or a similar cache-coherent and atomic interconnect).
Device drivers can use each of the HMM tools separately; you do not have
to use all the tools it provides.
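For reference, a minimal sketch of the mirror side of the toolbox, i.e.
how a driver could register an hmm_mirror and get called back when the
CPU page table changes. It assumes the hmm_mirror_ops/hmm_update
interface from include/linux/hmm.h around this series; the exact
callback parameters may differ once the in-flight HMM patches land, and
this is illustrative, not the actual ODP conversion:

    #include <linux/hmm.h>

    /* Illustrative only: invalidate device mappings when the CPU page
     * table changes for the mirrored address range.
     */
    static int example_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
                                                  const struct hmm_update *update)
    {
            /* Tear down or update device page table entries covering
             * [update->start, update->end) here.
             */
            return 0;
    }

    static void example_release(struct hmm_mirror *mirror)
    {
            /* The mm is going away; stop using the mirror. */
    }

    static const struct hmm_mirror_ops example_mirror_ops = {
            .sync_cpu_device_pagetables = example_sync_cpu_device_pagetables,
            .release                    = example_release,
    };

    static int example_mirror_init(struct hmm_mirror *mirror,
                                   struct mm_struct *mm)
    {
            mirror->ops = &example_mirror_ops;
            return hmm_mirror_register(mirror, mm);
    }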
For RDMA devices I do not expect a need to use the device memory support
of HMM. That device memory support is geared toward accelerators like GPUs.
You can find a branch [1] with all the prerequisites in it. This patch is
on top of rdma-next with the HMM patchset [2] and the mmu notifier
patchset [3] applied on top of it.
[1] https://cgit.freedesktop.org/~glisse/linux/log/?h=rdma-odp-hmm-v4
[2] https://lkml.org/lkml/2019/4/3/1032
[3] https://lkml.org/lkml/2019/3/26/900
Cc: linux-rdma@...r.kernel.org
Cc: Jason Gunthorpe <jgg@...lanox.com>
Cc: Leon Romanovsky <leonro@...lanox.com>
Cc: Doug Ledford <dledford@...hat.com>
Cc: Artemy Kovalyov <artemyko@...lanox.com>
Cc: Moni Shoua <monis@...lanox.com>
Cc: Mike Marciniszyn <mike.marciniszyn@...el.com>
Cc: Kaike Wan <kaike.wan@...el.com>
Cc: Dennis Dalessandro <dennis.dalessandro@...el.com>
Jérôme Glisse (1):
RDMA/odp: convert to use HMM for ODP v4
drivers/infiniband/Kconfig | 3 +-
drivers/infiniband/core/umem_odp.c | 499 ++++++++---------------------
drivers/infiniband/hw/mlx5/mem.c | 20 +-
drivers/infiniband/hw/mlx5/mr.c | 2 +-
drivers/infiniband/hw/mlx5/odp.c | 106 +++---
include/rdma/ib_umem_odp.h | 49 ++-
6 files changed, 231 insertions(+), 448 deletions(-)
--
2.20.1