Message-ID: <20220120140939.GA11707@lst.de>
Date: Thu, 20 Jan 2022 15:09:39 +0100
From: Christoph Hellwig <hch@....de>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Daniel Vetter <daniel@...ll.ch>,
Matthew Wilcox <willy@...radead.org>, nvdimm@...ts.linux.dev,
linux-rdma@...r.kernel.org, John Hubbard <jhubbard@...dia.com>,
linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
Ming Lei <ming.lei@...hat.com>, linux-block@...r.kernel.org,
linux-mm@...ck.org, netdev@...r.kernel.org,
Joao Martins <joao.m.martins@...cle.com>,
Logan Gunthorpe <logang@...tatee.com>,
Christoph Hellwig <hch@....de>
Subject: Re: Phyr Starter

On Tue, Jan 11, 2022 at 04:26:48PM -0400, Jason Gunthorpe wrote:
> What I did in RDMA was make an iterator rdma_umem_for_each_dma_block()
>
> The driver passes in the page size it wants and the iterator breaks up
> the SGL into that size.
>
> So, e.g. on a 16k page size system the SGL would be full of 16K stuff,
> but the driver only supports 4k, and so the iterator hands out 4 pages
> for each SGL entry.
>
> All the drivers use this to build their DMA lists and tables; it works
> really well.
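
A minimal sketch of how a driver consumes that iterator, assuming the
in-tree ib_umem helpers; dma_block_size here stands in for whatever page
size the driver asked for, and the programming step is only a placeholder:

	struct ib_block_iter biter;
	u64 dma_addr;

	/* walk the umem SGL in dma_block_size-sized DMA blocks */
	rdma_umem_for_each_dma_block(umem, &biter, dma_block_size) {
		dma_addr = rdma_block_iter_dma_address(&biter);
		/* program dma_addr into the device page table / MTT */
	}
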
The block layer also has equivalent functionality, via the virt_boundary
value in the queue_limits. This is needed for NVMe PRPs and RDMA drivers.
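
For NVMe that roughly amounts to the following, assuming the classic
blk_queue_virt_boundary() helper (a sketch, not the exact in-tree code):

	/*
	 * Disallow SG gaps relative to the controller page size so
	 * every request can be expressed as a simple PRP list.
	 */
	blk_queue_virt_boundary(q, NVME_CTRL_PAGE_SIZE - 1);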