Message-ID: <20251012-fix-unmap-v4-0-9eefc90ed14c@fb.com>
Date: Sun, 12 Oct 2025 22:32:23 -0700
From: Alex Mastro <amastro@...com>
To: Alex Williamson <alex.williamson@...hat.com>
CC: Jason Gunthorpe <jgg@...pe.ca>,
	Alejandro Jimenez <alejandro.j.jimenez@...cle.com>,
	<kvm@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
	Alex Mastro <amastro@...com>
Subject: [PATCH v4 0/3] vfio: handle DMA map/unmap up to the addressable limit

This patch series fixes vfio_iommu_type1.c to support
VFIO_IOMMU_MAP_DMA and VFIO_IOMMU_UNMAP_DMA operations targeting IOVA
ranges which end exactly at the addressable limit, i.e. ranges where
iova_start + iova_size would overflow to exactly zero.
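To make the boundary condition concrete, here is a minimal sketch (the
values are illustrative, not taken from the series):

	dma_addr_t iova = ~(dma_addr_t)0 - 0xfff;	/* last 4K page */
	size_t size = 0x1000;				/* one page */

	/*
	 * iova + size wraps to exactly 0, so any exclusive "end of
	 * range" computed as iova + size compares as less than every
	 * address, defeating checks written as `x < iova + size`.
	 */
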
Today, the VFIO UAPI has an inconsistency: the
VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE capability of VFIO_IOMMU_GET_INFO
reports that ranges extending to the end of the address space are
available for use, but they are not actually usable due to bugs in
handling this boundary condition.
For example:
vfio_find_dma_first_node() is called to find the first vfio_dma node to
unmap given an unmap range of [iova, iova + size). The check at the end
of the function intends to test whether the found node lies beyond the
end of the unmap range. That condition is incorrectly satisfied when
iova + size overflows to zero, causing the function to return NULL.
The same issue exists inside vfio_dma_do_unmap()'s while loop.
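In simplified form, the check at the end of vfio_find_dma_first_node()
looks roughly like this (an illustration of the bug class, not the
verbatim kernel code):

	/*
	 * Reject the found node if it starts at or beyond the end of
	 * the unmap range [iova, iova + size). When iova + size wraps
	 * to 0, this is true for every node, so NULL is returned even
	 * though nodes inside the range exist.
	 */
	if (dma && dma->iova >= iova + size)
		return NULL;
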
This bug was also reported by Alejandro Jimenez in [1][2].
Of primary concern are locations in the current code which perform
comparisons against (iova + size) expressions, where overflow to zero
is possible.
The initial list of candidate locations to audit was taken from the
following (an overflow-safe sketch follows the list):
$ rg 'iova.*\+.*size' -n drivers/vfio/vfio_iommu_type1.c | rg -v '\- 1'
173: else if (start >= dma->iova + dma->size)
192: if (start < dma->iova + dma->size) {
216: if (new->iova + new->size <= dma->iova)
1060: dma_addr_t iova = dma->iova, end = dma->iova + dma->size;
1233: if (dma && dma->iova + dma->size != iova + size)
1380: if (dma && dma->iova + dma->size != iova + size)
1501: ret = vfio_iommu_map(iommu, iova + dma->size, pfn, npage,
1504: vfio_unpin_pages_remote(dma, iova + dma->size, pfn,
1721: while (iova < dma->iova + dma->size) {
1743: i = iova + size;
1744: while (i < dma->iova + dma->size &&
1754: size_t n = dma->iova + dma->size - iova;
1785: iova += size;
1810: while (iova < dma->iova + dma->size) {
1823: i = iova + size;
1824: while (i < dma->iova + dma->size &&
2919: if (range.iova + range.size < range.iova)
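One overflow-safe pattern is to validate the range once up front with
check_add_overflow() from <linux/overflow.h> and work with an inclusive
end address thereafter. The helper below is illustrative only, and not
necessarily the exact shape the series uses:

	#include <linux/overflow.h>

	/*
	 * Illustrative helper: validate [iova, iova + size) and hand
	 * back an inclusive last address, so later comparisons can use
	 * `x <= last` and never wrap.
	 */
	static int iova_range_last(dma_addr_t iova, size_t size,
				   dma_addr_t *last)
	{
		if (!size)
			return -EINVAL;
		if (check_add_overflow(iova, (dma_addr_t)size - 1, last))
			return -EOVERFLOW;
		return 0;
	}
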
This series spends the first two commits on mechanical preparations
before the fix lands in the last commit.
[1] https://lore.kernel.org/qemu-devel/20250919213515.917111-1-alejandro.j.jimenez@oracle.com/
[2] https://lore.kernel.org/all/68e18f2c-79ad-45ec-99b9-99ff68ba5438@oracle.com/
Signed-off-by: Alex Mastro <amastro@...com>
---
Changes in v4:
- Fix type assigned to iova_end
- Clarify overflow checking, add checks to vfio_iommu_type1_dirty_pages
- Consider npage==0 an error for vfio_iommu_type1_pin_pages
- Link to v3: https://lore.kernel.org/r/20251010-fix-unmap-v3-0-306c724d6998@fb.com
Changes in v3:
- Fix handling of unmap_all in vfio_dma_do_unmap
- Fix !range.size to return -EINVAL for VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP
- Dedup !range.size checking
- Return -EOVERFLOW on check_*_overflow
- Link to v2: https://lore.kernel.org/r/20251007-fix-unmap-v2-0-759bceb9792e@fb.com
Changes in v2:
- Change to patch series rather than single commit
- Expand scope to fix more than just the unmap discovery path
- Link to v1: https://lore.kernel.org/r/20251005-fix-unmap-v1-1-6687732ed44e@fb.com
---
Alex Mastro (3):
vfio/type1: sanitize for overflow using check_*_overflow
vfio/type1: move iova increment to unmap_unpin_* caller
vfio/type1: handle DMA map/unmap up to the addressable limit
drivers/vfio/vfio_iommu_type1.c | 173 +++++++++++++++++++++++++---------------
1 file changed, 110 insertions(+), 63 deletions(-)
---
base-commit: 407aa63018d15c35a34938633868e61174d2ef6e
change-id: 20251005-fix-unmap-c3f3e87dabfa
Best regards,
--
Alex Mastro <amastro@...com>