Date: Tue, 11 Jun 2024 12:27:28 -0600
From: Martin Oliveira <martin.oliveira@...eticom.com>
To: linux-rdma@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Cc: Jason Gunthorpe <jgg@...pe.ca>,
	Leon Romanovsky <leon@...nel.org>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Tejun Heo <tj@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Logan Gunthorpe <logang@...tatee.com>,
	Martin Oliveira <martin.oliveira@...eticom.com>,
	Mike Marciniszyn <mike.marciniszyn@...el.com>,
	Shiraz Saleem <shiraz.saleem@...el.com>,
	Michael Guralnik <michaelgur@...dia.com>,
	Artemy Kovalyov <artemyko@...dia.com>
Subject: [PATCH v2 0/4] Enable P2PDMA in Userspace RDMA

This patch series enables P2PDMA memory to be used in userspace RDMA
transfers. With this series, P2PDMA memory mmapped into userspace (i.e.,
only NVMe CMBs, at the moment) can be used with ibv_reg_mr() (or
similar) interfaces. This can be tested by passing a sysfs p2pmem
allocator to the --mmap flag of the perftest tools.
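
For illustration, here is a minimal userspace sketch of that flow,
assuming a device that exposes its CMB through a p2pmem sysfs allocator
(the sysfs path and the choice of the first verbs device are placeholders
for illustration, not part of this series):

/*
 * Sketch only: mmap P2PDMA memory from a p2pmem sysfs allocator and
 * register it with ibv_reg_mr().  Link with -libverbs.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>
#include <infiniband/verbs.h>

int main(void)
{
	const size_t len = 4096;
	/* Placeholder path; substitute the BDF of a p2pmem provider. */
	int fd = open("/sys/bus/pci/devices/0000:03:00.0/p2pmem/allocate",
		      O_RDWR);
	if (fd < 0) { perror("open p2pmem"); return 1; }

	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			 fd, 0);
	if (buf == MAP_FAILED) { perror("mmap"); return 1; }

	int num;
	struct ibv_device **devs = ibv_get_device_list(&num);
	if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

	struct ibv_context *ctx = ibv_open_device(devs[0]);
	if (!ctx) { fprintf(stderr, "ibv_open_device failed\n"); return 1; }
	struct ibv_pd *pd = ibv_alloc_pd(ctx);
	if (!pd) { fprintf(stderr, "ibv_alloc_pd failed\n"); return 1; }

	/* With this series, the long-term pin taken here may cover
	 * P2PDMA (ZONE_DEVICE) pages. */
	struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
				       IBV_ACCESS_LOCAL_WRITE |
				       IBV_ACCESS_REMOTE_READ |
				       IBV_ACCESS_REMOTE_WRITE);
	if (!mr) { perror("ibv_reg_mr"); return 1; }

	ibv_dereg_mr(mr);
	ibv_dealloc_pd(pd);
	ibv_close_device(ctx);
	ibv_free_device_list(devs);
	munmap(buf, len);
	close(fd);
	return 0;
}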

This requires addressing three issues:

* Stop exporting kernfs VMAs with page_mkwrite, which is incompatible
with FOLL_LONGTERM and is redundant, since the default fault code has the
same behavior as kernfs_vma_page_mkwrite() (i.e., it calls
file_update_time()).

* Fix the folio_fast_pin_allowed() path to take ZONE_DEVICE pages into
account.

* Remove the restriction on FOLL_LONGTERM with FOLL_PCI_P2PDMA, which was
originally put in place out of an excess of caution, on the assumption
that P2PDMA would have similar problems to fsdax with
unmap_mapping_range(). Since P2PDMA only uses unmap_mapping_range() on
device unbind, and immediately waits for all page reference counts to
reach zero after calling it, it is believed to be safe from page reuse
and user access faults (see the illustrative sketch after this list). See
[1] for more discussion.
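
For illustration only, the sketch below paraphrases the kind of flag
guard the third item removes. It is standalone code with placeholder
flag values, not the literal mm/gup.c change:

/* Paraphrase of the removed restriction; flag values are placeholders
 * so the snippet compiles on its own. */
#include <stdbool.h>
#include <stdio.h>

#define FOLL_LONGTERM	(1u << 0)	/* placeholder value */
#define FOLL_PCI_P2PDMA	(1u << 1)	/* placeholder value */

/* Before: a long-term pin of P2PDMA pages was refused up front. */
static bool longterm_p2pdma_allowed_before(unsigned int gup_flags)
{
	if ((gup_flags & FOLL_PCI_P2PDMA) && (gup_flags & FOLL_LONGTERM))
		return false;	/* the caller saw an error */
	return true;
}

/* After: the combination is permitted, relying on the unbind behaviour
 * described above (unmap_mapping_range() followed by waiting for all
 * page reference counts to drop). */
static bool longterm_p2pdma_allowed_after(unsigned int gup_flags)
{
	(void)gup_flags;
	return true;
}

int main(void)
{
	unsigned int flags = FOLL_LONGTERM | FOLL_PCI_P2PDMA;

	printf("before: %d, after: %d\n",
	       longterm_p2pdma_allowed_before(flags),
	       longterm_p2pdma_allowed_after(flags));
	return 0;
}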

This was tested with a Mellanox ConnectX-6 SmartNIC (MT28908 Family)
using the mlx5_core driver, as well as with an NVMe CMB.

Thanks,
Martin

[1]: https://lore.kernel.org/linux-mm/87cypuvh2i.fsf@nvdebian.thelocal/T/

--

Changes in v2:
  - Remove page_mkwrite() for all kernfs, instead of creating a
    different vm_ops for p2pdma.

--

Martin Oliveira (4):
  kernfs: remove page_mkwrite() from vm_operations_struct
  mm/gup: handle ZONE_DEVICE pages in folio_fast_pin_allowed()
  mm/gup: allow FOLL_LONGTERM & FOLL_PCI_P2PDMA
  RDMA/umem: add support for P2P RDMA

 drivers/infiniband/core/umem.c |  3 +++
 fs/kernfs/file.c               | 26 +++-----------------------
 mm/gup.c                       |  9 ++++-----
 3 files changed, 10 insertions(+), 28 deletions(-)


base-commit: 83a7eefedc9b56fe7bfeff13b6c7356688ffa670
-- 
2.43.0

