Message-ID: <21bc406b-96f0-ae99-1606-9493f3cc2621@linux.intel.com>
Date: Mon, 28 Mar 2022 16:53:06 +0800
From: Lu Baolu <baolu.lu@...ux.intel.com>
To: David Stevens <stevensd@...omium.org>,
Kevin Tian <kevin.tian@...el.com>
Cc: baolu.lu@...ux.intel.com, Tina Zhang <tina.zhang@...el.com>,
iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] iommu/vt-d: calculate mask for non-aligned flushes
Hi David,
On 2022/3/22 14:35, David Stevens wrote:
> From: David Stevens <stevensd@...omium.org>
>
> Calculate the appropriate mask for non-size-aligned page selective
> invalidation. Since psi uses the mask value to mask out the lower order
> bits of the target address, properly flushing the iotlb requires using a
> mask value such that [pfn, pfn+pages) all lie within the flushed
> size-aligned region. This is not normally an issue because iova.c
> always allocates iovas that are aligned to their size. However, iovas
> which come from other sources (e.g. userspace via VFIO) may not be
> aligned.
This is a bug fix, right? Can you please add "Fixes" and "Cc stable" tags?
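Also, a concrete illustration of the problem might help readers of the
thread (my own numbers, not from the patch): with pfn = 3 and pages = 2,
the current code computes mask = ilog2(__roundup_pow_of_two(2)) = 1, so
PSI flushes only the size-2 aligned region [2, 4) and never invalidates
pfn 4, even though [3, 5) was mapped.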
>
> Signed-off-by: David Stevens <stevensd@...omium.org>
> ---
> v1 -> v2:
> - Calculate an appropriate mask for non-size-aligned iovas instead
> of falling back to domain selective flush.
>
> drivers/iommu/intel/iommu.c | 27 ++++++++++++++++++++++++---
> 1 file changed, 24 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index 5b196cfe9ed2..ab2273300346 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -1717,7 +1717,8 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
> unsigned long pfn, unsigned int pages,
> int ih, int map)
> {
> - unsigned int mask = ilog2(__roundup_pow_of_two(pages));
> + unsigned int aligned_pages = __roundup_pow_of_two(pages);
> + unsigned int mask = ilog2(aligned_pages);
> uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
> u16 did = domain->iommu_did[iommu->seq_id];
>
> @@ -1729,10 +1730,30 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
> if (domain_use_first_level(domain)) {
> domain_flush_piotlb(iommu, domain, addr, pages, ih);
> } else {
> + unsigned long bitmask = aligned_pages - 1;
> +
> + /*
> + * PSI masks the low order bits of the base address. If the
> + * address isn't aligned to the mask, then compute a mask value
> + * needed to ensure the target range is flushed.
> + */
> + if (unlikely(bitmask & pfn)) {
> + unsigned long end_pfn = pfn + pages - 1, shared_bits;
> +
> + /*
> + * Since end_pfn <= pfn + bitmask, the only way bits
> + * higher than bitmask can differ in pfn and end_pfn is
> + * by carrying. This means after masking out bitmask,
> + * high bits starting with the first set bit in
> + * shared_bits are all equal in both pfn and end_pfn.
> + */
> + shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
> + mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
Can you please add a few lines to the commit message explaining how this
magic line works? It's easier for people to understand if you include a
real example. :-)
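Something like the following might do. This is just my quick userspace
sketch of the shared_bits math, with __builtin_ctzl() standing in for
the kernel's __ffs():

#include <stdio.h>

int main(void)
{
	unsigned long pfn = 3, pages = 2;
	unsigned long aligned_pages = 2;	/* __roundup_pow_of_two(pages) */
	unsigned long bitmask = aligned_pages - 1;
	unsigned long end_pfn = pfn + pages - 1;	/* 4 */
	unsigned long shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
	unsigned int mask = shared_bits ? __builtin_ctzl(shared_bits) : 64;

	/*
	 * pfn ^ end_pfn = 0b111, so shared_bits ends in ...11111000
	 * and mask = 3: flush the size-8 aligned region [0, 8), which
	 * covers all of [3, 5). The unadjusted mask of 1 would only
	 * have flushed [2, 4).
	 */
	printf("mask = %u\n", mask);	/* prints "mask = 3" */
	return 0;
}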
Best regards,
baolu
> + }
> +
> /*
> * Fallback to domain selective flush if no PSI support or
> - * the size is too big. PSI requires page size to be 2 ^ x,
> - * and the base address is naturally aligned to the size.
> + * the size is too big.
> */
> if (!cap_pgsel_inv(iommu->cap) ||
> mask > cap_max_amask_val(iommu->cap))