Message-ID: <aRJhSkj6S48G_pHI@google.com>
Date: Mon, 10 Nov 2025 22:03:54 +0000
From: David Matlack <dmatlack@...gle.com>
To: Alex Mastro <amastro@...com>
Cc: Alex Williamson <alex@...zbot.org>, Shuah Khan <shuah@...nel.org>,
kvm@...r.kernel.org, linux-kselftest@...r.kernel.org,
linux-kernel@...r.kernel.org, Jason Gunthorpe <jgg@...pe.ca>
Subject: Re: [PATCH 1/4] vfio: selftests: add iova range query helpers
On 2025-11-10 01:10 PM, Alex Mastro wrote:
> +/*
> + * Return iova ranges for the device's container. Normalize vfio_iommu_type1 to
> + * report iommufd's iommu_iova_range. Free with free().
> + */
> +static struct iommu_iova_range *vfio_iommu_iova_ranges(struct vfio_pci_device *device,
> + size_t *nranges)
> +{
> + struct vfio_iommu_type1_info_cap_iova_range *cap_range;
> + struct vfio_iommu_type1_info *buf;
nit: Maybe name this variable `info` here and in vfio_iommu_info_buf()
and vfio_iommu_info_cap_hdr()? It is not an opaque buffer.
> + struct vfio_info_cap_header *hdr;
> + struct iommu_iova_range *ranges = NULL;
> +
> + buf = vfio_iommu_info_buf(device);
nit: How about naming this vfio_iommu_get_info() since it actually
fetches the info from VFIO? (It doesn't just allocate a buffer.)
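E.g. the top of this function would then read something like this (just a
sketch, combining this with the `info` naming nit above, and assuming the
helpers otherwise keep their current behavior):

	struct vfio_iommu_type1_info_cap_iova_range *cap_range;
	struct vfio_iommu_type1_info *info;
	struct vfio_info_cap_header *hdr;
	struct iommu_iova_range *ranges = NULL;

	/* Fetches (not just allocates) the iommu info from VFIO. */
	info = vfio_iommu_get_info(device);
	hdr = vfio_iommu_info_cap_hdr(info, VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE);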
> + VFIO_ASSERT_NOT_NULL(buf);
This assert is unnecessary (presumably vfio_iommu_info_buf() already
asserts internally if it can't fetch the info).
> +
> + hdr = vfio_iommu_info_cap_hdr(buf, VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE);
> + if (!hdr)
> + goto free_buf;
Is this to account for running on old versions of VFIO? Or are there
scenarios where VFIO can't report the list of IOVA ranges?
> +
> + cap_range = container_of(hdr, struct vfio_iommu_type1_info_cap_iova_range, header);
> + if (!cap_range->nr_iovas)
> + goto free_buf;
> +
> + ranges = malloc(cap_range->nr_iovas * sizeof(*ranges));
> + VFIO_ASSERT_NOT_NULL(ranges);
> +
> + for (u32 i = 0; i < cap_range->nr_iovas; i++) {
> + ranges[i] = (struct iommu_iova_range){
> + .start = cap_range->iova_ranges[i].start,
> + .last = cap_range->iova_ranges[i].end,
> + };
> + }
> +
> + *nranges = cap_range->nr_iovas;
> +
> +free_buf:
> + free(buf);
> + return ranges;
> +}
> +
> +/* Return iova ranges of the device's IOAS. Free with free() */
> +struct iommu_iova_range *iommufd_iova_ranges(struct vfio_pci_device *device,
> + size_t *nranges)
> +{
> + struct iommu_iova_range *ranges;
> + int ret;
> +
> + struct iommu_ioas_iova_ranges query = {
> + .size = sizeof(query),
> + .ioas_id = device->ioas_id,
> + };
> +
> + ret = ioctl(device->iommufd, IOMMU_IOAS_IOVA_RANGES, &query);
> + VFIO_ASSERT_EQ(ret, -1);
> + VFIO_ASSERT_EQ(errno, EMSGSIZE);
> + VFIO_ASSERT_GT(query.num_iovas, 0);
> +
> + ranges = malloc(query.num_iovas * sizeof(*ranges));
> + VFIO_ASSERT_NOT_NULL(ranges);
> +
> + query.allowed_iovas = (uintptr_t)ranges;
> +
> + ioctl_assert(device->iommufd, IOMMU_IOAS_IOVA_RANGES, &query);
> + *nranges = query.num_iovas;
> +
> + return ranges;
> +}
> +
> +struct iommu_iova_range *vfio_pci_iova_ranges(struct vfio_pci_device *device,
> + size_t *nranges)
nit: Both iommufd and VFIO represent the number of IOVA ranges as a u32.
Perhaps we should do the same in VFIO selftests?
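E.g. (sketch only):

	struct iommu_iova_range *vfio_pci_iova_ranges(struct vfio_pci_device *device,
						      u32 *nranges);

That would match iommufd's num_iovas and VFIO's nr_iovas, and avoid the
implicit u32 -> size_t conversions.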