[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20240312105103.0f3b6a80@meshulam.tesarici.cz>
Date: Tue, 12 Mar 2024 10:51:03 +0100
From: Petr Tesařík <petr@...arici.cz>
To: Robin Murphy <robin.murphy@....com>
Cc: Will Deacon <will@...nel.org>, Michael Kelley <mhklinux@...look.com>,
Nicolin Chen <nicolinc@...dia.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "kernel-team@...roid.com"
<kernel-team@...roid.com>, "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
Christoph Hellwig <hch@....de>, Marek Szyprowski
<m.szyprowski@...sung.com>, Petr Tesarik
<petr.tesarik1@...wei-partners.com>, Dexuan Cui <decui@...rosoft.com>
Subject: Re: [PATCH v6 4/6] swiotlb: Fix alignment checks when both
allocation and DMA masks are present
On Tue, 12 Mar 2024 09:38:36 +0000
Robin Murphy <robin.murphy@....com> wrote:
> On 2024-03-12 8:52 am, Petr Tesařík wrote:
> > On Mon, 11 Mar 2024 22:49:11 +0000
> > Will Deacon <will@...nel.org> wrote:
> >
> >> On Mon, Mar 11, 2024 at 09:36:10PM +0000, Michael Kelley wrote:
> >>> From: Petr Tesařík <petr@...arici.cz>
> >>>> On Fri, 8 Mar 2024 15:28:27 +0000
> >>>> Will Deacon <will@...nel.org> wrote:
> >>>>> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> >>>>> index c20324fba814..c381a7ed718f 100644
> >>>>> --- a/kernel/dma/swiotlb.c
> >>>>> +++ b/kernel/dma/swiotlb.c
> >>>>> @@ -981,8 +981,7 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
> >>>>> dma_addr_t tbl_dma_addr =
> >>>>> phys_to_dma_unencrypted(dev, pool->start) & boundary_mask;
> >>>>> unsigned long max_slots = get_max_slots(boundary_mask);
> >>>>> - unsigned int iotlb_align_mask =
> >>>>> - dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
> >>>>> + unsigned int iotlb_align_mask = dma_get_min_align_mask(dev);
> >>>>> unsigned int nslots = nr_slots(alloc_size), stride;
> >>>>> unsigned int offset = swiotlb_align_offset(dev, orig_addr);
> >>>>> unsigned int index, slots_checked, count = 0, i;
> >>>>> @@ -993,6 +992,14 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
> >>>>> BUG_ON(!nslots);
> >>>>> BUG_ON(area_index >= pool->nareas);
> >>>>>
> >>>>> + /*
> >>>>> + * Ensure that the allocation is at least slot-aligned and update
> >>>>> + * 'iotlb_align_mask' to ignore bits that will be preserved when
> >>>>> + * offsetting into the allocation.
> >>>>> + */
> >>>>> + alloc_align_mask |= (IO_TLB_SIZE - 1);
> >>>>> + iotlb_align_mask &= ~alloc_align_mask;
> >>>>> +
> >>>>
> >>>> I have started writing the KUnit test suite, and the results look
> >>>> incorrect to me for this case.
> >>>>
> >>>> I'm calling swiotlb_tbl_map_single() with:
> >>>>
> >>>> * alloc_align_mask = 0xfff
> >>>> * a device with min_align_mask = 0xfff
> >>>> * the 12 lowest bits of orig_addr are 0xfa0
> >>>>
> >>>> The min_align_mask becomes zero after the masking added by this patch,
> >>>> and the 12 lowest bits of the returned address are 0x7a0, i.e. not
> >>>> equal to 0xfa0.
> >>>
> >>> The address returned by swiotlb_tbl_map_single() is the slot index
> >>> converted to an address, plus the offset modulo the min_align_mask for
> >>> the device. The local variable "offset" in swiotlb_tbl_map_single()
> >>> should be 0xfa0. The slot index should be an even number to meet
> >>> the alloc_align_mask requirement. And the pool->start address should
> >>> be at least page aligned, producing a page-aligned address *before* the
> >>> offset is added. Can you debug which of these isn't true for the case
> >>> you are seeing?
> >>
> >> I was just looking into this, and I think the problem starts because
> >> swiotlb_align_offset() doesn't return the offset modulo the min_align_mask,
> >> but instead returns the offset *into the slot*:
> >>
> >> return addr & dma_get_min_align_mask(dev) & (IO_TLB_SIZE - 1);
> >>
> >> so this presumably lops off bit 11 without adjusting the slot number.
> >
> > Yes. You will never see an offset bigger than IO_TLB_SIZE.
> >
> >> I don't think swiotlb_find_slots() should be handling this though; it's
> >> more about how swiotlb_tbl_map_single() puts the address back together
> >> again.
> >>>> In other words, the min_align_mask constraint is not honored. Of course,
> >>>> given the above values, it is not possible to honor both min_align_mask
> >>>> and alloc_align_mask.
> >>>
> >>> When orig_addr is specified and min_align_mask is set, alloc_align_mask
> >>> governs the address of the _allocated_ space, which is not necessarily the
> >>> returned physical address. The min_align_mask may dictate some
> >>> pre-padding of size "offset" within the allocated space, and the returned
> >>> address is *after* that pre-padding. In this way, both can be honored.
> >>
> >> I agree, modulo the issue with the offset calculation.
> >
> > *sigh*
> >
> > This is exactly what I tried to suggest here:
> >
> > https://lore.kernel.org/linux-iommu/20240301180853.5ac20b27@meshulamtesarici.cz/
> >
> > To which Robin Murphy replied:
> >
> >> That doesn't make sense - a caller asks to map some range of kernel
> >> addresses and they get back a corresponding range of DMA addresses; they
> >> cannot make any reasonable assumptions about DMA addresses *outside*
> >> that range.
> >
> > It sounded like a misunderstanding back then already, but in light of
> > the present findings, should I send the corresponding patch after all?
>
> No, that comment was in reference to the idea of effectively forcing
> alloc_align_mask in order to honour min_align_mask - specifically that
> the reasoning given for it was spurious, but it's clear now it would
> also simply exacerbate this problem.
>
> Simply put, if min_align_mask is specified alone, SWIOTLB can allocate a
> roughly-aligned range of slots such that the bounce offset is always
> less than IO_TLB_SIZE from the start of the allocation; if both
> min_align_mask and alloc_align_mask are specified, then the bounce
> offset may be larger than IO_TLB_SIZE, and SWIOTLB needs to be able to
> handle that correctly. There is still no benefit in forcing the latter
> case to happen more often than it needs to.
So yes, it was a misunderstanding. Here's what I wrote:
I thought about it some more, and I believe I know what should happen
if the first two constraints appear to be mutually exclusive.
I thought it was clear that the two constraints "appear mutually
exclusive" only if both are specified. Admittedly, I also tried to
combine the historic page alignment with the explicit alloc_align_mask
somehow, so that could have caused confusion.
Anyway, let me send the patch and discuss it in a new thread.
Petr T
Powered by blists - more mailing lists