Date: Wed, 11 Aug 2021 09:26:02 +0000
From: "Mi, Dapeng1" <dapeng1.mi@...el.com>
To: David Stevens <stevensd@...omium.org>, Robin Murphy <robin.murphy@....com>,
	Will Deacon <will@...nel.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Tom Murphy <murphyt7@....ie>,
	"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>
Subject: RE: [PATCH v3 5/5] dma-iommu: account for min_align_mask

> -----Original Message-----
> From: iommu <iommu-bounces@...ts.linux-foundation.org> On Behalf Of
> David Stevens
> Sent: Wednesday, August 11, 2021 10:43 AM
> To: Robin Murphy <robin.murphy@....com>; Will Deacon <will@...nel.org>
> Cc: linux-kernel@...r.kernel.org; Tom Murphy <murphyt7@....ie>;
> iommu@...ts.linux-foundation.org; David Stevens <stevensd@...omium.org>
> Subject: [PATCH v3 5/5] dma-iommu: account for min_align_mask
>
> From: David Stevens <stevensd@...omium.org>
>
> For devices which set min_align_mask, swiotlb preserves the offset of the
> original physical address within that mask. Since __iommu_dma_map
> accounts for non-aligned addresses, passing a non-aligned swiotlb address
> with the swiotlb aligned size results in the offset being accounted for
> twice in the size passed to iommu_map_atomic. The extra page exposed to
> DMA is also not cleaned up by __iommu_dma_unmap, since that function
> unmaps with the correct size. This causes mapping failures if the iova
> gets reused, due to collisions in the iommu page tables.
>
> To fix this, pass the original size to __iommu_dma_map, since that
> function already handles alignment.
>
> Additionally, when swiotlb returns non-aligned addresses, there is
> padding at the start of the bounce buffer that needs to be cleared.
>
> Fixes: 1f221a0d0dbf ("swiotlb: respect min_align_mask")
> Signed-off-by: David Stevens <stevensd@...omium.org>
> ---
>  drivers/iommu/dma-iommu.c | 23 ++++++++++++-----------
>  1 file changed, 12 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 89b689bf801f..ffa7e8ef5db4 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -549,9 +549,8 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
> 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
> 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
> 	struct iova_domain *iovad = &cookie->iovad;
> -	size_t aligned_size = org_size;
> -	void *padding_start;
> -	size_t padding_size;
> +	void *tlb_start;
> +	size_t aligned_size, iova_off, mapping_end_off;
> 	dma_addr_t iova;
>
> 	/*
> @@ -566,24 +565,26 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
> 	if (phys == DMA_MAPPING_ERROR)
> 		return DMA_MAPPING_ERROR;
>
> -	/* Cleanup the padding area. */
> -	padding_start = phys_to_virt(phys);
> -	padding_size = aligned_size;
> +	iova_off = iova_offset(iovad, phys);
> +	tlb_start = phys_to_virt(phys - iova_off);
>
> +	/* Cleanup the padding area. */
> 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
> 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) {
> -		padding_start += org_size;
> -		padding_size -= org_size;
> +		mapping_end_off = iova_off + org_size;
> +		memset(tlb_start, 0, iova_off);
> +		memset(tlb_start + mapping_end_off, 0,
> +		       aligned_size - mapping_end_off);
> +	} else {
> +		memset(tlb_start, 0, aligned_size);
> 	}

Nice fix. It would be better to move the "Cleanup ..." comment into the
if case, which would be more accurate.
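To make the padding arithmetic in the new if branch concrete, here is a
minimal userspace sketch. The 8192-byte buffer, 512-byte iova_off, and
4096-byte org_size are made-up values for illustration only; in the
kernel, iova_off comes from iova_offset(iovad, phys) and the buffer is a
swiotlb slot. The sketch only shows that the two memset() calls clear
exactly the leading and trailing padding while leaving the driver's
payload untouched:

#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Made-up values for illustration only. */
	char tlb[8192];			/* stand-in for the aligned swiotlb slot */
	size_t aligned_size = sizeof(tlb);
	size_t iova_off = 512;		/* offset preserved for min_align_mask */
	size_t org_size = 4096;		/* size the driver asked to map */
	size_t mapping_end_off = iova_off + org_size;

	memset(tlb, 'D', sizeof(tlb));	/* pretend the slot holds stale data */

	/* As in the DMA_TO_DEVICE / DMA_BIDIRECTIONAL branch: clear only
	 * the padding before and after the mapping, leaving bytes
	 * [iova_off, mapping_end_off) intact for the device to read. */
	memset(tlb, 0, iova_off);
	memset(tlb + mapping_end_off, 0, aligned_size - mapping_end_off);

	printf("cleared [0,%zu) and [%zu,%zu); payload [%zu,%zu) untouched\n",
	       iova_off, mapping_end_off, aligned_size,
	       iova_off, mapping_end_off);
	return 0;
}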