Message-ID: <20200909183848.GA950693@nvidia.com>
Date: Wed, 9 Sep 2020 15:38:48 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Adit Ranadive <aditr@...are.com>, Ariel Elior <aelior@...vell.com>,
Potnuri Bharat Teja <bharat@...lsio.com>,
"David S. Miller" <davem@...emloft.net>,
Devesh Sharma <devesh.sharma@...adcom.com>,
"Doug Ledford" <dledford@...hat.com>,
Faisal Latif <faisal.latif@...el.com>,
"Gal Pressman" <galpress@...zon.com>,
<GR-everest-linux-l2@...vell.com>,
"Wei Hu(Xavier)" <huwei87@...ilicon.com>,
Jakub Kicinski <kuba@...nel.org>,
"Leon Romanovsky" <leon@...nel.org>, <linux-rdma@...r.kernel.org>,
Weihang Li <liweihang@...wei.com>,
Michal Kalderon <mkalderon@...vell.com>,
"Naresh Kumar PBS" <nareshkumar.pbs@...adcom.com>,
<netdev@...r.kernel.org>, Lijun Ou <oulijun@...wei.com>,
VMware PV-Drivers <pv-drivers@...are.com>,
"Selvin Xavier" <selvin.xavier@...adcom.com>,
Yossi Leybovich <sleybo@...zon.com>,
Somnath Kotur <somnath.kotur@...adcom.com>,
Sriharsha Basavapatna <sriharsha.basavapatna@...adcom.com>,
Yishai Hadas <yishaih@...dia.com>
CC: Firas JahJah <firasj@...zon.com>,
Henry Orosco <henry.orosco@...el.com>,
Leon Romanovsky <leonro@...dia.com>,
"Michael J. Ruhl" <michael.j.ruhl@...el.com>,
Michal Kalderon <michal.kalderon@...vell.com>,
Miguel Ojeda <miguel.ojeda.sandonis@...il.com>,
Shiraz Saleem <shiraz.saleem@...el.com>
Subject: Re: [PATCH v2 00/17] RDMA: Improve use of umem in DMA drivers
On Fri, Sep 04, 2020 at 07:41:41PM -0300, Jason Gunthorpe wrote:
> Most RDMA drivers rely on a linear table of DMA addresses organized in
> some device specific page size.
>
> For a while now the core code has had the rdma_for_each_block() SG
> iterator to help break a umem into DMA blocks for use in the device lists.
>
> Improve on this by adding rdma_umem_for_each_dma_block(),
> ib_umem_dma_offset() and ib_umem_num_dma_blocks().
>
> Replace open-coded versions, or calls to fixed PAGE_SIZE APIs, in most
> of the drivers with one of the above APIs.
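To make the pattern concrete: rdma_umem_for_each_dma_block() walks a mapping in device-sized blocks and yields the aligned DMA address of each one, which drivers then copy into their page lists. A minimal userspace model of that walk, simplified to a single physically contiguous mapping (fill_page_list and its parameters are illustrative, not kernel API):

```c
#include <stddef.h>
#include <stdint.h>

/* Userspace model of the rdma_umem_for_each_dma_block() pattern,
 * assuming one physically contiguous mapping. Each iteration yields the
 * pgsz-aligned DMA address of one device-sized block; the first and
 * last blocks may be only partially covered by the mapping. */
static size_t fill_page_list(uint64_t dma_addr, uint64_t length,
			     uint64_t pgsz, uint64_t *pas, size_t max)
{
	uint64_t blk = dma_addr & ~(pgsz - 1);	/* align down to block start */
	uint64_t end = dma_addr + length;
	size_t n = 0;

	for (; blk < end && n < max; blk += pgsz)
		pas[n++] = blk;
	return n;
}
```

For example, a 0x2000-byte mapping at DMA address 0x100800 with 4K blocks covers three blocks (0x100000, 0x101000, 0x102000) because it straddles a block boundary at each end.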
>
> Get rid of the really weird and duplicative ib_umem_page_count().
>
> Fix two problems with ib_umem_find_best_pgsz(), and several problems
> related to computing the wrong DMA list length if IOVA != umem->address.
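The IOVA issue is plain alignment arithmetic: the number of device blocks depends on where the region starts relative to the device page size at the IOVA, not at the process VA. A standalone sketch of the ib_umem_num_dma_blocks() computation (a model, not the kernel header):

```c
#include <stdint.h>

static uint64_t align_down(uint64_t v, uint64_t a) { return v & ~(a - 1); }
static uint64_t align_up(uint64_t v, uint64_t a)   { return align_down(v + a - 1, a); }

/* Model of ib_umem_num_dma_blocks(): count the pgsz blocks spanned by
 * [iova, iova + length). Substituting umem->address for the IOVA
 * miscounts whenever the two have different offsets within a block. */
static uint64_t num_dma_blocks(uint64_t iova, uint64_t length, uint64_t pgsz)
{
	return (align_up(iova + length, pgsz) - align_down(iova, pgsz)) / pgsz;
}
```

An 8 KiB region needs two 4K blocks when the IOVA is block-aligned but three when it starts mid-block, so computing the count from an address with a different in-block offset yields the wrong DMA list length.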
>
> At this point many of the drivers have a clear path to call
> ib_umem_find_best_pgsz() and replace hardcoded PAGE_SIZE or PAGE_SHIFT
> values when constructing their DMA lists.
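The constraint ib_umem_find_best_pgsz() enforces can be modeled as a bitmask: any bit where a segment's DMA address differs from the IOVA it should map to, and any misaligned interior segment boundary, caps the usable block size. The sketch below is a simplified model of that selection, not the kernel's implementation (the segment arrays and helper names are illustrative):

```c
#include <stdint.h>

static uint64_t lowbit(uint64_t x) { return x & (~x + 1); }

/* Simplified model of best-page-size selection: accumulate every
 * address bit that differs between a DMA segment and its expected
 * IOVA, plus the IOVA of each interior segment join (which must be
 * block-aligned). The lowest set bit of the mask caps the block size;
 * pick the largest supported power of two under that cap. */
static uint64_t best_pgsz(uint64_t pgsz_bitmap, uint64_t iova,
			  const uint64_t *seg_addr, const uint64_t *seg_len,
			  int nsegs)
{
	uint64_t mask = 0, va = iova, best = 0;

	for (int i = 0; i < nsegs; i++) {
		mask |= seg_addr[i] ^ va;	/* VA/DMA bit mismatches */
		va += seg_len[i];
		if (i != nsegs - 1)
			mask |= va;		/* interior join alignment */
	}

	uint64_t limit = mask ? lowbit(mask) : ~0ULL;
	for (uint64_t b = pgsz_bitmap; b; b &= b - 1)	/* ascending bits */
		if (lowbit(b) <= limit)
			best = lowbit(b);
	return best;
}
```

With supported sizes {4K, 8K, 64K} and two 8K segments at DMA 0xA0000 and 0xC2000 mapped at IOVA 0x10000, the join at IOVA 0x12000 rules out 64K blocks, so 8K is selected; a single contiguous segment would get 64K.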
>
> This is the first series in an effort to modernize the umem usage in all
> the DMA drivers.
>
> v1: https://lore.kernel.org/r/0-v1-00f59ce24f1f+19f50-umem_1_jgg@nvidia.com
> v2:
> - Fix ib_umem_find_best_pgsz() to use IOVA not umem->addr
> - Fix ib_umem_num_dma_blocks() to use IOVA not umem->addr
> - Two new patches to remove wrong open coded versions of
> ib_umem_num_dma_blocks() from EFA and i40iw
> - Redo the mlx4 ib_umem_num_dma_blocks() to do less and be safer
> until the whole thing can be moved to ib_umem_find_best_pgsz()
> - Two new patches to delete calls to ib_umem_offset() in qedr and
> ocrdma
>
> Signed-off-by: Jason Gunthorpe <jgg@...dia.com>
>
> Jason Gunthorpe (17):
> RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page
> boundary
> RDMA/umem: Prevent small pages from being returned by
> ib_umem_find_best_pgsz()
> RDMA/umem: Use simpler logic for ib_umem_find_best_pgsz()
> RDMA/umem: Add rdma_umem_for_each_dma_block()
> RDMA/umem: Replace for_each_sg_dma_page with
> rdma_umem_for_each_dma_block
> RDMA/umem: Split ib_umem_num_pages() into ib_umem_num_dma_blocks()
> RDMA/efa: Use ib_umem_num_dma_blocks()
> RDMA/i40iw: Use ib_umem_num_dma_blocks()
> RDMA/qedr: Use rdma_umem_for_each_dma_block() instead of open-coding
> RDMA/qedr: Use ib_umem_num_dma_blocks() instead of
> ib_umem_page_count()
> RDMA/bnxt: Do not use ib_umem_page_count() or ib_umem_num_pages()
> RDMA/hns: Use ib_umem_num_dma_blocks() instead of opencoding
> RDMA/ocrdma: Use ib_umem_num_dma_blocks() instead of
> ib_umem_page_count()
> RDMA/pvrdma: Use ib_umem_num_dma_blocks() instead of
> ib_umem_page_count()
> RDMA/mlx4: Use ib_umem_num_dma_blocks()
> RDMA/qedr: Remove fbo and zbva from the MR
> RDMA/ocrdma: Remove fbo from MR
Applied to for-next with Leon's note. Thanks, everyone
Jason