Message-ID:
<SN6PR02MB4157C66C31129D09E19ACC3BD4232@SN6PR02MB4157.namprd02.prod.outlook.com>
Date: Mon, 4 Mar 2024 18:08:07 +0000
From: Michael Kelley <mhklinux@...look.com>
To: Robin Murphy <robin.murphy@....com>, Petr Tesařík
<petr@...arici.cz>
CC: Christoph Hellwig <hch@....de>, Will Deacon <will@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Petr Tesarik
<petr.tesarik1@...wei-partners.com>, "kernel-team@...roid.com"
<kernel-team@...roid.com>, "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
Marek Szyprowski <m.szyprowski@...sung.com>, Dexuan Cui
<decui@...rosoft.com>, Nicolin Chen <nicolinc@...dia.com>
Subject: RE: [PATCH v5 6/6] swiotlb: Remove pointless stride adjustment for
allocations >= PAGE_SIZE
From: Robin Murphy <robin.murphy@....com> Sent: Monday, March 4, 2024 9:11 AM
>
> On 04/03/2024 4:04 pm, Michael Kelley wrote:
> > From: Petr Tesařík <petr@...arici.cz> Sent: Monday, March 4, 2024 7:55 AM
> >>
> >> On Mon, 4 Mar 2024 13:37:56 +0000
> >> Robin Murphy <robin.murphy@....com> wrote:
> >>
> >>> On 04/03/2024 11:00 am, Petr Tesařík wrote:
> >>> [...]
> >>>
> >>>>> For #3, alloc_align_mask specifies the required alignment. No
> >>>>> pre-padding is needed. Per earlier comments from Robin[1],
> >>>>> it's reasonable to assume alloc_align_mask (i.e., the granule)
> >>>>> is >= IO_TLB_SIZE. The original address is not relevant in
> >>>>> determining the alignment, and the historical page alignment
> >>>>> requirement does not apply since alloc_align_mask explicitly
> >>>>> states the alignment.
> >>>
> >>> FWIW I'm also starting to wonder about getting rid of the alloc_size
> >>> argument and just have SWIOTLB round the end address up to
> >>> alloc_align_mask itself as part of all these calculations. Seems like it
> >>> could potentially end up a little simpler, maybe?
> >
> > Yes, I was thinking exactly this. But my reasoning was to solve the
> > bug in #4 that I previously pointed out. If iommu_dma_map_page()
> > does *not* do
> >
> > aligned_size = iova_align(iovad, size);
> >
> > but swiotlb_tbl_map_single() rounds up the size based on
> > alloc_align_mask *after* adding the offset modulo
> > min_align_mask, then the rounded-up size won't exceed IO_TLB_SIZE,
> > regardless of which bits are set in orig_addr.
>
> Ah, neat, I had a gut feeling that something like that might also fall
> out, I just didn't feel like working through the details to see if
> "simpler" could lead to "objectively better" :)
>
> I guess at worst we might also need to pass an alloc_align_mask to
> swiotlb_max_mapping_size() as well, but even that's not necessarily a
> bad thing if it keeps the equivalent calculations close together within
> SWIOTLB and makes things more robust overall.
>

I haven't seen a reason to incorporate alloc_align_mask into
swiotlb_max_mapping_size(). But let me confirm my understanding of
this scenario:
1. The requested size without any rounding is 11K (just an example).
2. The original address starts at offset 7K modulo a 64K granule.
3. The min_align_mask for the device is 4K - 1.
4. Then it's OK for swiotlb to return an address at offset 3K modulo
the 64K granule. Such an address meets the min_align_mask, even
though it is a different offset in the granule.
5. swiotlb will allocate 3K of pre-padding for the min_align_mask
requirement. If swiotlb also does the rounding up, it would take
the original 11K, add 3K of pre-padding, then round up to 64K and
effectively allocate 50K of post-padding (see the sketch after this
list).
6. The zeroing of the pre-padding and post-padding is messed
up in iommu_dma_map_page(), but that's a separate issue.
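
To check my arithmetic, here is a quick user-space sketch of the
scenario above. The constants simply mirror the example (64K granule,
4K - 1 min_align_mask); none of this is real swiotlb code.

/*
 * User-space sketch of the padding arithmetic in the example above.
 * Constants mirror the scenario, not any real device or kernel code.
 */
#include <stdio.h>

#define SZ_1K            1024UL
#define GRANULE          (64 * SZ_1K)      /* IOVA granule */
#define MIN_ALIGN_MASK   (4 * SZ_1K - 1)   /* device min_align_mask */
#define ALLOC_ALIGN_MASK (GRANULE - 1)     /* alloc_align_mask from the granule */

int main(void)
{
        unsigned long size = 11 * SZ_1K;        /* requested mapping size */
        unsigned long orig_offset = 7 * SZ_1K;  /* orig_addr modulo the granule */

        /* pre-padding preserves the orig_addr bits covered by min_align_mask */
        unsigned long pre_pad = orig_offset & MIN_ALIGN_MASK;

        /* round up to alloc_align_mask *after* adding the pre-padding */
        unsigned long alloc = (pre_pad + size + ALLOC_ALIGN_MASK) & ~ALLOC_ALIGN_MASK;

        unsigned long post_pad = alloc - pre_pad - size;

        /* prints: pre_pad=3K alloc=64K post_pad=50K */
        printf("pre_pad=%luK alloc=%luK post_pad=%luK\n",
               pre_pad / SZ_1K, alloc / SZ_1K, post_pad / SZ_1K);
        return 0;
}
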

Assuming my understanding is correct, this all works correctly even
when the originally requested size is as large as
swiotlb_max_mapping_size(). It works because the pre-padding is never
more than min_align_mask, and swiotlb_max_mapping_size() takes that
pre-padding into account.
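
For reference, the accounting I'm relying on is roughly the following
paraphrase of swiotlb_max_mapping_size() (a user-space sketch, not the
verbatim kernel code):

/*
 * Paraphrased sketch of how swiotlb_max_mapping_size() accounts for
 * the worst-case pre-padding: the maximum contiguous allocation is
 * IO_TLB_SEGSIZE slots, less the min_align_mask pre-padding rounded
 * up to whole slots. Not the verbatim kernel implementation.
 */
#include <stdio.h>

#define IO_TLB_SHIFT   11
#define IO_TLB_SIZE    (1UL << IO_TLB_SHIFT)   /* 2K slot size */
#define IO_TLB_SEGSIZE 128                     /* max contiguous slots */

static unsigned long max_mapping_size(unsigned long min_align_mask)
{
        unsigned long min_align = 0;

        if (min_align_mask)
                min_align = ((min_align_mask + IO_TLB_SIZE - 1) / IO_TLB_SIZE)
                            * IO_TLB_SIZE;

        return IO_TLB_SIZE * IO_TLB_SEGSIZE - min_align;
}

int main(void)
{
        /* with min_align_mask = 4K - 1, the maximum drops from 256K to
         * 252K, leaving room for up to 4K of pre-padding */
        printf("max = %luK\n", max_mapping_size(4 * 1024 - 1) / 1024);
        return 0;
}
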

I would normally propose a patch to implement these changes, but I
don't have hardware on which to test the IOMMU code paths. That said,
I'll write an untested patch if it would be a helpful starting point
for someone else to test.
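
For concreteness, the ordering I have in mind is roughly this
illustrative helper. The name and parameters are made up and don't
match the real swiotlb_tbl_map_single() signature; the point is only
that the round-up to alloc_align_mask happens after the min_align_mask
offset is added, so iommu_dma_map_page() no longer needs its own
iova_align() on the size it passes in.

/*
 * Illustrative helper only -- made-up name and parameters, not the
 * real swiotlb_tbl_map_single() signature. Add the min_align_mask
 * offset (pre-padding) first, then round the total up to
 * alloc_align_mask inside swiotlb.
 */
unsigned long swiotlb_alloc_size(unsigned long orig_addr,
                                 unsigned long size,
                                 unsigned long min_align_mask,
                                 unsigned long alloc_align_mask)
{
        unsigned long pre_pad = orig_addr & min_align_mask;

        return (pre_pad + size + alloc_align_mask) & ~alloc_align_mask;
}
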

FYI, I'm treating this discussion as separate from Will's patch set.
It's something that could be done as a follow-on set.

Michael