Message-Id: <20210830045925.4163412-1-stevensd@google.com>
Date: Mon, 30 Aug 2021 13:59:18 +0900
From: David Stevens <stevensd@...omium.org>
To: Robin Murphy <robin.murphy@....com>, Christoph Hellwig <hch@....de>
Cc: Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
Lu Baolu <baolu.lu@...ux.intel.com>,
Tom Murphy <murphyt7@....ie>, Rajat Jain <rajatja@...gle.com>,
iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
David Stevens <stevensd@...gle.com>
Subject: [PATCH v7 0/7] Fixes for dma-iommu swiotlb bounce buffers

This patch set includes various fixes for dma-iommu's swiotlb bounce
buffers for untrusted devices.

The min_align_mask issue was found when running fio on an untrusted nvme
device with bs=512. The other issues were found via code inspection, so
I don't have any specific use cases where things were not working, nor
any concrete performance numbers.

There are two issues related to min_align_mask that this patch series
does not attempt to fix. First, it does not address the case where
min_align_mask is larger than the IOVA granule. Doing so requires
changes to IOVA allocation, and is not specific to when swiotlb bounce
buffers are used. This is not a problem in practice today, since the
only driver which uses min_align_mask is nvme, which sets it to 4096.
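
As a rough illustration of that constraint, here is a minimal standalone
sketch (plain userspace C; the helper name and the numbers are made up for
illustration and are not code from this series). A granule-aligned IOVA can
only reproduce the original address bits that fall below the granule size by
picking an offset inside the granule, so a min_align_mask wider than the
granule cannot be honored without changing IOVA allocation:

/*
 * Illustrative userspace sketch, not kernel code.  The IOVA allocator
 * hands out granule-aligned regions; the only address bits a mapping can
 * preserve per-buffer are those below the granule size, chosen via an
 * in-granule offset.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Can (orig & min_align_mask) be preserved by a granule-aligned IOVA plus
 * an in-granule offset?  Only if the mask fits inside the granule. */
static bool can_preserve_align_bits(uint64_t min_align_mask, uint64_t granule)
{
	return (min_align_mask + 1) <= granule;
}

int main(void)
{
	uint64_t granule = 4096;	/* typical IOVA granule (IOMMU page size) */

	/* nvme-style 4 KiB alignment -> mask 0xfff, fits in the granule */
	printf("mask 0xfff, granule 4k: %s\n",
	       can_preserve_align_bits(0xfff, granule) ? "ok" : "not handled");

	/* hypothetical 8 KiB alignment -> mask 0x1fff; bit 12 of the original
	 * address would have to match, which granule-aligned IOVA allocation
	 * cannot guarantee today */
	printf("mask 0x1fff, granule 4k: %s\n",
	       can_preserve_align_bits(0x1fff, granule) ? "ok" : "not handled");

	return 0;
}
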
The second issue this series does not address is the fact that extra
swiotlb slots adjacent to a bounce buffer can be exposed to untrusted
devices whose drivers use min_align_mask. Fixing this requires being
able to allocate padding slots at the beginning of a swiotlb allocation.
This is a rather significant change that I am not comfortable making.
Without being able to handle this, there is also little point in
clearing the padding at the start of such a buffer, since we can only
clear based on (IO_TLB_SIZE - 1) instead of iova_mask.
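
The standalone sketch below (again plain userspace C with made-up names and
example numbers, not code from this series) works through that arithmetic:
with an nvme-style mask, the padding exposed in front of a bounce buffer can
span a whole preceding swiotlb slot, while clearing based on
(IO_TLB_SIZE - 1) only reaches the part of the padding inside the buffer's
own slot.

/*
 * Illustrative userspace sketch, not kernel code.  Constants mirror the
 * usual kernel values (2 KiB swiotlb slots, 4 KiB IOVA granule) but are
 * hard-coded here.
 */
#include <stdint.h>
#include <stdio.h>

#define IO_TLB_SIZE	2048u	/* swiotlb slot size */

int main(void)
{
	uint64_t min_align_mask = 0xfff;	/* nvme-style 4 KiB alignment */
	uint64_t iova_mask = 0xfff;		/* 4 KiB IOVA granule, minus one */
	uint64_t orig_addr = 0x12345a00;	/* example original buffer address */

	/* the bounce buffer must start at this offset so that the bits
	 * covered by min_align_mask match the original address */
	uint64_t offset = orig_addr & min_align_mask;

	/* padding in front of the buffer that the device can reach, measured
	 * against the IOVA granule the device is actually mapped with */
	uint64_t exposed = offset & iova_mask;

	/* what clearing based on (IO_TLB_SIZE - 1) can cover: only the part
	 * of that padding inside the buffer's own swiotlb slot; the slots
	 * before it may belong to other mappings and cannot be cleared
	 * without allocating padding slots up front */
	uint64_t clearable = offset & (IO_TLB_SIZE - 1);

	printf("exposed padding: %llu bytes, clearable: %llu bytes\n",
	       (unsigned long long)exposed, (unsigned long long)clearable);
	return 0;
}
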
v6 -> v7:
- Remove unsafe attempt to clear padding at start of swiotlb buffer
- Rewrite commit message for min_align_mask commit to better explain
  the problem it's fixing
- Rebase on iommu/core
- Acknowledge unsolved issues in cover letter

v5 -> v6:
- Remove unnecessary line break
- Remove redundant config check

v4 -> v5:
- Fix xen build error
- Move _swiotlb refactor into its own patch

v3 -> v4:
- Fold _swiotlb functions into _page functions
- Add patch to align swiotlb buffer to iovad granule
- Combine if checks in iommu_dma_sync_sg_* functions

v2 -> v3:
- Add new patch to address min_align_mask bug
- Set SKIP_CPU_SYNC flag after syncing in map/unmap
- Properly call arch_sync_dma_for_cpu in iommu_dma_sync_sg_for_cpu

v1 -> v2:
- Split fixes into dedicated patches
- Less invasive changes to fix arch_sync when mapping
- Leave dev_is_untrusted check for strict iommu

David Stevens (7):
  dma-iommu: fix sync_sg with swiotlb
  dma-iommu: fix arch_sync_dma for map
  dma-iommu: skip extra sync during unmap w/swiotlb
  dma-iommu: fold _swiotlb helpers into callers
  dma-iommu: Check CONFIG_SWIOTLB more broadly
  swiotlb: support aligned swiotlb buffers
  dma-iommu: account for min_align_mask w/swiotlb

 drivers/iommu/dma-iommu.c | 188 +++++++++++++++++---------------------
 drivers/xen/swiotlb-xen.c |   2 +-
 include/linux/swiotlb.h   |   3 +-
 kernel/dma/swiotlb.c      |  11 ++-
 4 files changed, 93 insertions(+), 111 deletions(-)

--
2.33.0.259.gc128427fd7-goog