Message-ID:
<SN6PR02MB415727E61B5295C259CCB268D4512@SN6PR02MB4157.namprd02.prod.outlook.com>
Date: Mon, 19 Feb 2024 04:05:21 +0000
From: Michael Kelley <mhklinux@...look.com>
To: Nicolin Chen <nicolinc@...dia.com>, Will Deacon <will@...nel.org>
CC: "sagi@...mberg.me" <sagi@...mberg.me>, "hch@....de" <hch@....de>,
"axboe@...nel.dk" <axboe@...nel.dk>, "kbusch@...nel.org" <kbusch@...nel.org>,
"joro@...tes.org" <joro@...tes.org>, "robin.murphy@....com"
<robin.murphy@....com>, "jgg@...dia.com" <jgg@...dia.com>,
"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>, "murphyt7@....ie"
<murphyt7@....ie>, "baolu.lu@...ux.intel.com" <baolu.lu@...ux.intel.com>
Subject: RE: [PATCH v1 0/2] nvme-pci: Fix dma-iommu mapping failures when
PAGE_SIZE=64KB

From: Nicolin Chen <nicolinc@...dia.com> Sent: Friday, February 16, 2024 9:20 PM
>
> Hi Will,
>
> On Fri, Feb 16, 2024 at 04:13:12PM +0000, Will Deacon wrote:
> > On Thu, Feb 15, 2024 at 04:26:23PM -0800, Nicolin Chen wrote:
> > > On Thu, Feb 15, 2024 at 04:35:45PM +0000, Will Deacon wrote:
> > > > On Thu, Feb 15, 2024 at 02:22:09PM +0000, Will Deacon wrote:
> > > > > On Wed, Feb 14, 2024 at 11:57:32AM -0800, Nicolin Chen wrote:
> > > > > > On Wed, Feb 14, 2024 at 04:41:38PM +0000, Will Deacon wrote:
> > > > > > > On Tue, Feb 13, 2024 at 01:53:55PM -0800, Nicolin Chen wrote:
> > > > > > And it seems to get worse, as even a 64KB mapping is failing:
> > > > > > [ 0.239821] nvme 0000:00:01.0: swiotlb buffer is full (sz: 65536 bytes), total 32768 (slots), used 0 (slots)
> > > > > >
> > > > > > With a printk, I found the iotlb_align_mask isn't correct:
> > > > > > swiotlb_area_find_slots:alloc_align_mask 0xffff, iotlb_align_mask 0x800
> > > > > >
> > > > > > But fixing the iotlb_align_mask to 0x7ff still fails the 64KB
> > > > > > mapping..
> > > > >
> > > > > Hmm. A mask of 0x7ff doesn't make a lot of sense given that the slabs
> > > > > are 2KiB aligned. I'll try plugging in some of the constants you have
> > > > > here, as something definitely isn't right...
> > > >
> > > > Sorry, another ask: please can you print 'orig_addr' in the case of the
> > > > failing allocation?
> > >
> > > I added nvme_print_sgl() in the nvme-pci driver before its
> > > dma_map_sgtable() call, so the orig_addr isn't aligned with
> > > PAGE_SIZE=64K or NVME_CTRL_PAGE_SIZE=4K:
> > > sg[0] phys_addr:0x0000000105774600 offset:17920 length:512 dma_address:0x0000000000000000 dma_length:0
> > >
> > > Also attaching some verbose logs, in case you'd like to check:
> > > nvme 0000:00:01.0: swiotlb_area_find_slots: dma_get_min_align_mask 0xfff, IO_TLB_SIZE 0xfffff7ff
> > > nvme 0000:00:01.0: swiotlb_area_find_slots: alloc_align_mask 0xffff, iotlb_align_mask 0x7ff
> > > nvme 0000:00:01.0: swiotlb_area_find_slots: stride 0x20, max 0xffff
> > > nvme 0000:00:01.0: swiotlb_area_find_slots: tlb_addr=0xbd830000, iotlb_align_mask=0x7ff, alloc_align_mask=0xffff
> > > => nvme 0000:00:01.0: swiotlb_area_find_slots: orig_addr=0x105774600, iotlb_align_mask=0x7ff
> >
> > With my patches, I think 'iotlb_align_mask' will be 0x800 here, so this
>
> Oops, my bad. I forgot to revert the part that I mentioned in
> my previous reply.
>
> > particular allocation might be alright, however I think I'm starting to
> > see the wider problem. The IOMMU code is asking for a 64k-aligned
> > allocation so that it can map it safely, but at the same time
> > dma_get_min_align_mask() is asking for congruence in the 4k NVME page
> > offset. Now, because we're going to allocate a 64k-aligned mapping and
> > offset it, I think the NVME alignment will just fall out in the wash and
> > checking the 'orig_addr' (which includes the offset) is wrong.
> >
> > So perhaps this diff (which I'm sadly not able to test) will help? You'll
> > want to apply it on top of my other patches. The idea is to ignore the
> > bits of 'orig_addr' which will be aligned automatically by offseting from
> > the aligned allocation. I fixed the max() thing too, although that's only
> > an issue for older kernels.
>
> Yea, I tested all 4 patches. They still failed at some large
> mapping, until I added on top of them my PATCH-1 implementing
> the max_mapping_size op. IOW, with your patches it looks like
> 252KB max_mapping_size is working :)
>
> Though we seem to have a solution now, I hope we can make it
> applicable to older kernels too. The mapping failure on arm64
> with PAGE_SIZE=64KB looks like a regression to me, since dma-
> iommu started to use swiotlb bounce buffer.
>
> Thanks
> Nicolin
>
> > --->8
> >
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index 283eea33dd22..4a000d97f568 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -981,8 +981,7 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
> > dma_addr_t tbl_dma_addr =
> > phys_to_dma_unencrypted(dev, pool->start) & boundary_mask;
> > unsigned long max_slots = get_max_slots(boundary_mask);
> > - unsigned int iotlb_align_mask =
> > - dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
> > + unsigned int iotlb_align_mask = dma_get_min_align_mask(dev);
> > unsigned int nslots = nr_slots(alloc_size), stride;
> > unsigned int offset = swiotlb_align_offset(dev, orig_addr);
> > unsigned int index, slots_checked, count = 0, i;
> > @@ -993,6 +992,9 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
> > BUG_ON(!nslots);
> > BUG_ON(area_index >= pool->nareas);
> >
> > + alloc_align_mask |= (IO_TLB_SIZE - 1);
> > + iotlb_align_mask &= ~alloc_align_mask;
> > +
> > /*
> > * For mappings with an alignment requirement don't bother looping to
> > * unaligned slots once we found an aligned one.
> > @@ -1004,7 +1006,7 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
> > * allocations.
> > */
> > if (alloc_size >= PAGE_SIZE)
> > - stride = max(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
> > + stride = umax(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
> >
> > spin_lock_irqsave(&area->lock, flags);
> > if (unlikely(nslots > pool->area_nslabs - area->used))
> >

This thread prompted me to test another scenario similar to Nicolin's, and
on the DMA direct map path I saw the same kind of mapping failures. Will's
Patch 1/3 fixes my scenario, though I'm still looking at the details to
convince myself that it is fully correct, even with the above tweak.

Here's my scenario:
* ARM64 VM with 8 vCPUs running in the Azure public cloud
* Linux 6.8-rc4 kernel built with 64 Kbyte pages
* Using the Azure/Hyper-V synthetic storage driver (drivers/scsi/storvsc_drv.c).
This driver sets dma_min_align_mask to 4 Kbytes (see the snippet after this
list), because the DMA descriptors sent to the Hyper-V host are always in 4K
units, regardless of the guest page size.
* Running with the standard 64 Mbytes swiotlb size. Added swiotlb=force
on the kernel boot line, to see if it could handle the scenario. This
simulates what CoCo VMs do on the x86 side. We don't have arm64
CoCo VMs, so the scenario is artificial for the moment. But it ought
to work.
* There's no vIOMMU in the VM, so all DMA requests go through the
DMA direct map path, not the IOMMU path (which is Nicolin's scenario).
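
For reference, the dma_min_align_mask setting mentioned in the storvsc
bullet above is done in storvsc_probe(). Quoting from memory, so treat it
as approximate rather than the literal upstream line, it is essentially:

    /* Bounce buffers must preserve the offset within a 4 Kbyte Hyper-V
     * page, even when the guest PAGE_SIZE is 64 Kbytes.
     */
    dma_set_min_align_mask(&device->device, HV_HYP_PAGE_SIZE - 1);

With HV_HYP_PAGE_SIZE being 4096, dma_get_min_align_mask() then reports
0xfff for the device.
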
In my scenario, the block layer generates I/O requests up to 252 Kbytes in
size. But with the unmodified 6.8-rc4 code, these requests fail to be mapped
in the swiotlb even though there's plenty of space available. Here's why:

The 252 Kbyte request size is determined by the block layer, ultimately
based on the value returned by swiotlb_max_mapping_size(), which is
based on the dma_min_align_mask, which is 4K in my scenario. But the
current upstream code in swiotlb_search_pool_area() adds ~PAGE_MASK
into the iotlb_align_mask, which increases the alignment requirement.
With the increased alignment, the offset in orig_addr is likely larger than
4 Kbytes, and that offset plus 252 Kbytes no longer fits in the 256 Kbyte
swiotlb limit. swiotlb_search_pool_area() can never find a slot with
enough space. (We could easily add a WARN_ON in
swiotlb_search_pool_area() to catch such an occurrence.)
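
To put rough numbers on it (the constants are from the swiotlb code; the
arithmetic is mine, so double-check it):

    /*
     * Slot size:        IO_TLB_SIZE    = 1 << IO_TLB_SHIFT  =   2 Kbytes
     * Max contiguous:   IO_TLB_SEGSIZE = 128 slots          = 256 Kbytes
     * Max mapping size: 256 Kbytes - roundup(0xfff, 2 Kbytes) = 252 Kbytes
     *
     * Once ~PAGE_MASK is ORed into iotlb_align_mask, the bounce buffer
     * must reproduce orig_addr's offset within a 64 Kbyte page rather
     * than within a 4 Kbyte Hyper-V page. An orig_addr offset of, say,
     * 48 Kbytes then needs 48 + 252 = 300 Kbytes of contiguous slots,
     * which can never fit in a 256 Kbyte segment, no matter how empty
     * the pool is.
     */
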
Will's Patch 1/3 fixes this problem by using PAGE_SIZE/PAGE_SHIFT
only to compute the stride, and does not add ~PAGE_MASK into
iotlb_align_mask. In my scenario with Will's patch, iotlb_align_mask
expresses a 4 Kbyte alignment requirement, and
swiotlb_search_pool_area() succeeds for a 252 Kbyte request.
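
In other words, after his Patch 1/3 the relevant pieces of
swiotlb_search_pool_area() behave roughly like this (my paraphrase of the
effect, not the literal diff):

    /* Only the device's dma_min_align_mask constrains which slots are
     * acceptable for a given orig_addr ...
     */
    unsigned int iotlb_align_mask =
        dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);

    /* ... while PAGE_SIZE only widens the stride used to step through
     * candidate slots, without adding to the congruence check.
     */
    if (alloc_size >= PAGE_SIZE)
        stride = max(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
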
But what about the alloc_align_mask argument to
swiotlb_search_pool_area()? If it increases the alignment beyond
what swiotlb_max_mapping_size() is based on, the same problem
will occur. This is what happens in Nicolin's scenario when the
NVMe driver sets dma_min_align_mask to 4K, but the dma-iommu
code passes a larger value in alloc_align_mask. The later version of
Nicolin's Patch 1/2 that uses iovad.granule in
iommu_dma_max_mapping_size() solves the immediate problem.
But something still seems messed up, because the value from
iommu_dma_max_mapping_size() must be coordinated with the
value passed as the alloc_align_mask argument.
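
Stated as an invariant (my formulation, not code from either series), what
I think has to hold is roughly:

    /*
     * Whatever alignment the mapping path will later ask swiotlb to honor
     * (the larger of dma_min_align_mask and the alloc_align_mask that
     * dma-iommu passes, e.g. a 64 Kbyte IOVA granule) must already be
     * subtracted from the size we advertise:
     *
     *   max_mapping_size + worst-case preserved offset
     *                        <= IO_TLB_SEGSIZE * IO_TLB_SIZE (256 Kbytes)
     *
     * Otherwise the block layer can legitimately build a request that
     * swiotlb_search_pool_area() can never place.
     */
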
Will's 3-patch series is based on a different scenario -- the
swiotlb_alloc() case. Patch 3 of his series also uses the
alloc_align_mask argument, and I haven't thought through
all the implications.

Finally, there's this comment in swiotlb_search_pool_area():

    /*
     * For allocations of PAGE_SIZE or larger only look for page aligned
     * allocations.
     */

The comment has been around for a while, and it confused me.
It seems to apply only for the swiotlb_alloc() case when
orig_addr is zero. From what I can tell, allocations for DMA
mapping purposes don't do such alignment. Or is there some
other meaning to that comment that I'm missing?
Michael