Message-ID: <5543656c-9b38-5fe6-7372-9a61a1269b5d@linux.intel.com>
Date: Wed, 1 Jul 2020 09:08:53 +0800
From: Lu Baolu <baolu.lu@...ux.intel.com>
To: Jacob Pan <jacob.jun.pan@...ux.intel.com>,
iommu@...ts.linux-foundation.org,
LKML <linux-kernel@...r.kernel.org>,
Joerg Roedel <joro@...tes.org>,
David Woodhouse <dwmw2@...radead.org>
Cc: baolu.lu@...ux.intel.com, Yi Liu <yi.l.liu@...el.com>,
"Tian, Kevin" <kevin.tian@...el.com>,
Raj Ashok <ashok.raj@...el.com>,
Eric Auger <eric.auger@...hat.com>
Subject: Re: [PATCH v2 4/7] iommu/vt-d: Handle non-page aligned address
On 7/1/20 5:07 AM, Jacob Pan wrote:
> From: Liu Yi L <yi.l.liu@...el.com>
>
> Address information for device TLB invalidation comes from userspace
> when a device is directly assigned to a guest with vIOMMU support.
> VT-d requires the address to be page aligned. This patch checks and
> enforces page alignment of the address; otherwise reserved bits can
> be set in the invalidation descriptor, and an unrecoverable fault
> will be reported due to the non-zero value in those reserved bits.
>
> Signed-off-by: Liu Yi L <yi.l.liu@...el.com>
> Signed-off-by: Jacob Pan <jacob.jun.pan@...ux.intel.com>
Fixes: 61a06a16e36d8 ("iommu/vt-d: Support flushing more translation cache types")
Acked-by: Lu Baolu <baolu.lu@...ux.intel.com>
Best regards,
baolu
> ---
> drivers/iommu/intel/dmar.c | 20 ++++++++++++++++++--
> 1 file changed, 18 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
> index d9f973fa1190..3899f3161071 100644
> --- a/drivers/iommu/intel/dmar.c
> +++ b/drivers/iommu/intel/dmar.c
> @@ -1455,9 +1455,25 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
> * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
> * ECAP.
> */
> - desc.qw1 |= addr & ~mask;
> - if (size_order)
> + if (addr & ~VTD_PAGE_MASK)
> + pr_warn_ratelimited("Invalidate non-page aligned address %llx\n", addr);
> +
> + /* Take page address */
> + desc.qw1 |= QI_DEV_EIOTLB_ADDR(addr);
> +
> +	if (size_order) {
> +		/*
> +		 * The least significant 0 bit in the address marks the
> +		 * invalidation range, so force the low bits to 1s to
> +		 * avoid indicating a smaller range than desired.
> +		 */
> +		desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT - 1,
> +					VTD_PAGE_SHIFT);
> +		/* Clear the size_order bit to indicate the size */
> +		desc.qw1 &= ~mask;
> +		/* Set the S bit to indicate flushing more than 1 page */
>  		desc.qw1 |= QI_DEV_EIOTLB_SIZE;
> +	}
>
> qi_submit_sync(iommu, &desc, 1, 0);
> }
>
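For reference, here is a minimal standalone sketch of the qw1 encoding
above. This is not the kernel code: VTD_PAGE_SHIFT, VTD_PAGE_MASK,
GENMASK_ULL and QI_DEV_EIOTLB_SIZE are re-declared locally (assuming
VTD_PAGE_SHIFT == 12 and the S bit at qw1 bit 11, as in mainline), and
encode_qw1() is a made-up helper name:

#include <stdio.h>
#include <stdint.h>

#define VTD_PAGE_SHIFT	12
#define VTD_PAGE_MASK	(~((1ULL << VTD_PAGE_SHIFT) - 1))
/* Simplified GENMASK_ULL, valid for 0 <= l <= h <= 63 */
#define GENMASK_ULL(h, l)	((~0ULL << (l)) & (~0ULL >> (63 - (h))))
#define QI_DEV_EIOTLB_SIZE	(1ULL << 11)	/* S bit: flush > 1 page */

static uint64_t encode_qw1(uint64_t addr, unsigned int size_order)
{
	/* The least significant 0 bit at or above VTD_PAGE_SHIFT marks
	 * the invalidation range (VT-d spec 6.5.2.6). */
	uint64_t mask = 1ULL << (VTD_PAGE_SHIFT + size_order - 1);
	uint64_t qw1 = addr & VTD_PAGE_MASK;	/* take the page address */

	if (size_order) {
		/* Force the low address bits to 1s ... */
		qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT - 1,
				   VTD_PAGE_SHIFT);
		qw1 &= ~mask;			/* ... clear the size bit */
		qw1 |= QI_DEV_EIOTLB_SIZE;	/* S = 1 */
	}
	return qw1;
}

int main(void)
{
	/* size_order = 3 (8 pages) at 0x12345000: bits 13:12 become 1s,
	 * bit 14 becomes 0, so the descriptor covers the naturally
	 * aligned 32KB region 0x12340000 - 0x12347fff. */
	printf("qw1 = 0x%llx\n",
	       (unsigned long long)encode_qw1(0x12345000ULL, 3));
	return 0;
}

Note that 0x12345000 is not 32KB aligned; forcing the low bits to 1s
makes the hardware invalidate the whole naturally aligned region
containing the address rather than a smaller one, which is the
behavior the comment in the patch describes.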