Date: Fri, 8 Mar 2024 17:49:20 +0100
From: Christoph Hellwig <hch@....de>
To: Jason Gunthorpe <jgg@...pe.ca>
Cc: Christoph Hellwig <hch@....de>, Leon Romanovsky <leon@...nel.org>,
	Robin Murphy <robin.murphy@....com>,
	Marek Szyprowski <m.szyprowski@...sung.com>,
	Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
	Chaitanya Kulkarni <chaitanyak@...dia.com>,
	Jonathan Corbet <corbet@....net>, Jens Axboe <axboe@...nel.dk>,
	Keith Busch <kbusch@...nel.org>, Sagi Grimberg <sagi@...mberg.me>,
	Yishai Hadas <yishaih@...dia.com>,
	Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>,
	Kevin Tian <kevin.tian@...el.com>,
	Alex Williamson <alex.williamson@...hat.com>,
	Jérôme Glisse <jglisse@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-block@...r.kernel.org, linux-rdma@...r.kernel.org,
	iommu@...ts.linux.dev, linux-nvme@...ts.infradead.org,
	kvm@...r.kernel.org, linux-mm@...ck.org,
	Bart Van Assche <bvanassche@....org>,
	Damien Le Moal <damien.lemoal@...nsource.wdc.com>,
	Amir Goldstein <amir73il@...il.com>,
	"josef@...icpanda.com" <josef@...icpanda.com>,
	"Martin K. Petersen" <martin.petersen@...cle.com>,
	"daniel@...earbox.net" <daniel@...earbox.net>,
	Dan Williams <dan.j.williams@...el.com>,
	"jack@...e.com" <jack@...e.com>, Zhu Yanjun <zyjzyj2000@...il.com>
Subject: Re: [RFC RESEND 00/16] Split IOMMU DMA mapping operation to two
 steps

On Thu, Mar 07, 2024 at 05:01:16PM -0400, Jason Gunthorpe wrote:
> > 
> > It's just kinda hard to do.  For aligned IOMMU mapping you'd only
> > have one dma_addr_t mapping (or maybe a few if P2P regions are
> > involved), so this probably doesn't matter.  For direct mappings
> > you'd have a few, but maybe the better answer is to use THP
> > more aggressively and reduce the number of segments.
> 
> Right, those things have all been done. 100GB of huge pages is still
> using a fair amount of memory for storing dma_addr_t's.
> 
> It is hard to do perfectly, but I think it is not so bad if we focus
> on the direct only case and simple systems that can exclude swiotlb
> early on.

Even with direct mappings only we still need to take care of
cache synchronization.
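
To spell that out with the existing streaming API (just a rough
sketch, dev/buf/len are placeholders): even without an IOMMU a
non-coherent device needs the explicit syncs around every reuse of
a direct mapping:

	dma_addr_t addr = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

	/* device DMAs into buf ... */
	dma_sync_single_for_cpu(dev, addr, len, DMA_FROM_DEVICE);
	/* ... CPU may now look at buf ... */
	dma_sync_single_for_device(dev, addr, len, DMA_FROM_DEVICE);
	/* ... next device transfer into buf ... */

	dma_unmap_single(dev, addr, len, DMA_FROM_DEVICE);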

> > If all flows include multiple non-coalesced regions that just makes
> > things very complicated, and that's exactly what I'd want to avoid.
> 
> I don't see how to avoid it unless we say RDMA shouldn't use this API,
> which is kind of the whole point from my perspective..

The DMA API callers really need to know what is P2P and what is not,
for various reasons.  And they should generally have that information
available, either from pin_user_pages, which needs to special-case
it, or from the in-kernel I/O submitter that builds it from P2P and
normal memory.
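
E.g. on the pin_user_pages side something like this (rough sketch,
everything except the mm/DMA APIs is a placeholder) is enough to
tell the two kinds of memory apart:

	long i, npinned, nr_p2p = 0, nr_normal = 0;

	npinned = pin_user_pages(start, nr_pages,
				 FOLL_WRITE | FOLL_PCI_P2PDMA, pages);
	if (npinned < 0)
		return npinned;

	for (i = 0; i < npinned; i++) {
		if (is_pci_p2pdma_page(pages[i]))
			nr_p2p++;	/* P2P mapping path */
		else
			nr_normal++;	/* normal dma_map_page() path */
	}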

> Sure, 3 SGL entries is fine, that isn't what I'm pointing at
> 
> I'm saying that today if you give such a scatterlist to dma_map_sg()
> it scans it and computes the IOVA space need, allocates one IOVA
> space, then subdivides that single space up into the 3 HW SGLs you
> show.
> 
> If you don't preserve that then we are calling, 4k at a time, a
> dma_map_page() which is not anywhere close to the same outcome as what
> dma_map_sg did. I may not get contiguous IOVA, I may not get 3 SGLs,
> and we call into the IOVA allocator a huge number of times.

Again, your callers must know what is a P2P region and what is not.
I don't think it is a hard burden to do mappings at that granularity,
and we can encapsulate this in nice helpers for, say, the block layer
and pin_user_pages callers to start.
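
Such a helper would just walk the pinned pages in runs of the same
kind and map each run in one go, roughly (my_map_* are entirely
made-up names, only to show the shape):

	long i, run;
	int ret;

	for (i = 0; i < npinned; i += run) {
		bool p2p = is_pci_p2pdma_page(pages[i]);

		/* find the run of pages of the same kind */
		for (run = 1; i + run < npinned; run++)
			if (is_pci_p2pdma_page(pages[i + run]) != p2p)
				break;

		ret = p2p ? my_map_p2p_run(dev, pages + i, run) :
			    my_map_normal_run(dev, pages + i, run);
		if (ret)
			goto unwind;	/* undo the runs mapped so far */
	}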

> 
> It needs to work following the same basic structure of dma_map_sg,
> unfolding that logic into helpers so that the driver can provide
> the data structure:
> 
>  - Scan the io ranges and figure out how much IOVA needed
>    (dma_io_summarize_range)

That is in general a function of the upper layer and not the DMA code.

>  - Allocate the IOVA (dma_init_io)

And this step is only needed for the iommu case.

> > That's why I really just want 2 cases.  If the caller guarantees the
> > range is coalescable and there is an IOMMU, use the iommu-API-like
> > API, else just iterate over map_single/page.
> 
> But how does the caller even know if it is coalescable? Other than the
> trivial case of a single CPU range, that is a complicated detail based
> on what pages are inside the range combined with the capability of the
> device doing DMA. I don't see a simple way for the caller to figure
> this out. You need to sweep every page and collect some information on
> it. The above is to abstract that detail.

dma_get_merge_boundary already provides this information in terms
of the device capabilities.  And given that the callers know what
is P2P and what is not, we have all the information that is needed.
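
I.e. the caller can pick the right path up front with something as
simple as (sketch, the two submit functions are made up):

	unsigned long mask = dma_get_merge_boundary(dev);

	if (mask)
		/* non-zero: the DMA layer (e.g. dma-iommu) can merge
		 * segments, so expect a single contiguous IOVA range
		 */
		ret = my_submit_merged(dev, pages, npinned);
	else
		/* 0: no merging, plan for one dma_addr_t per segment */
		ret = my_submit_per_segment(dev, pages, npinned);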

