Message-ID: <20200630101947.1d45ac94@jacob-builder>
Date:   Tue, 30 Jun 2020 10:19:47 -0700
From:   Jacob Pan <jacob.jun.pan@...ux.intel.com>
To:     Lu Baolu <baolu.lu@...ux.intel.com>
Cc:     iommu@...ts.linux-foundation.org,
        LKML <linux-kernel@...r.kernel.org>,
        Joerg Roedel <joro@...tes.org>,
        David Woodhouse <dwmw2@...radead.org>,
        Yi Liu <yi.l.liu@...el.com>,
        "Tian, Kevin" <kevin.tian@...el.com>,
        Raj Ashok <ashok.raj@...el.com>,
        Eric Auger <eric.auger@...hat.com>,
        jacob.jun.pan@...ux.intel.com
Subject: Re: [PATCH 4/7] iommu/vt-d: Handle non-page aligned address

On Thu, 25 Jun 2020 18:05:52 +0800
Lu Baolu <baolu.lu@...ux.intel.com> wrote:

> Hi,
> 
> On 2020/6/23 23:43, Jacob Pan wrote:
> > From: Liu Yi L <yi.l.liu@...el.com>
> > 
> > Address information for device TLB invalidation comes from userspace
> > when a device is directly assigned to a guest with vIOMMU support.
> > VT-d requires a page aligned address. This patch checks and enforces
> > that the address is page aligned, otherwise reserved bits can be set
> > in the invalidation descriptor and an unrecoverable fault will be
> > reported due to the non-zero value in the reserved bits.
> > 
> > Signed-off-by: Liu Yi L <yi.l.liu@...el.com>
> > Signed-off-by: Jacob Pan <jacob.jun.pan@...ux.intel.com>
> > ---
> >   drivers/iommu/intel/dmar.c | 19 +++++++++++++++++--
> >   1 file changed, 17 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
> > index d9f973fa1190..53f4e5003620 100644
> > --- a/drivers/iommu/intel/dmar.c
> > +++ b/drivers/iommu/intel/dmar.c
> > @@ -1455,9 +1455,24 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
> >   	 * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
> >   	 * ECAP.
> >   	 */
> > -	desc.qw1 |= addr & ~mask;
> > -	if (size_order)
> > +	if (addr & ~VTD_PAGE_MASK)
> > +		pr_warn_ratelimited("Invalidate non-page aligned address %llx\n", addr);
> > +
> > +	if (size_order) {
> > +		/* Take page address */
> > +		desc.qw1 |= QI_DEV_EIOTLB_ADDR(addr);
> 
> If size_order == 0 (that means only a single page is about to be
> invalidated), do you still need to set the ADDR field of the descriptor?
> 
Good catch! We should always set addr. I will move the addr assignment out
of the if condition.
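
Roughly like this (untested sketch against this patch; the size and S bit
handling is kept exactly as in the hunk above):

	if (addr & ~VTD_PAGE_MASK)
		pr_warn_ratelimited("Invalidate non-page aligned address %llx\n",
				    addr);

	/* Always take the page address, even when size_order == 0 */
	desc.qw1 |= QI_DEV_EIOTLB_ADDR(addr);

	if (size_order) {
		/*
		 * Existing 0s in address below size_order may be the least
		 * significant bit, we must set them to 1s to avoid having
		 * smaller size than desired.
		 */
		desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT,
					VTD_PAGE_SHIFT);
		/* Clear size_order bit to indicate size */
		desc.qw1 &= ~mask;
		/* Set the S bit to indicate flushing more than 1 page */
		desc.qw1 |= QI_DEV_EIOTLB_SIZE;
	}
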
> Best regards,
> baolu
> 
> > +		/*
> > +		 * Existing 0s in address below size_order may be the least
> > +		 * significant bit, we must set them to 1s to avoid having
> > +		 * smaller size than desired.
> > +		 */
> > +		desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT,
> > +					VTD_PAGE_SHIFT);
> > +		/* Clear size_order bit to indicate size */
> > +		desc.qw1 &= ~mask;
> > +		/* Set the S bit to indicate flushing more than 1 page */
> > +		desc.qw1 |= QI_DEV_EIOTLB_SIZE;
> > +	}
> >   
> >   	qi_submit_sync(iommu, &desc, 1, 0);
> >   }
> >   
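
For reference, a tiny stand-alone sketch of the "lowest zero bit above the
page offset encodes the range" trick that the comment in the hunk relies on
(untested, user-space only; PAGE_SHIFT and GENMASK_ULL below are local
stand-ins, and the descriptor/S bit are left out, so it only shows the
address bits):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12	/* local stand-in for VTD_PAGE_SHIFT */
#define GENMASK_ULL(h, l) \
	((~0ULL << (l)) & (~0ULL >> (63 - (h))))

static uint64_t encode_addr(uint64_t addr, unsigned int size_order)
{
	/* the single bit that must end up 0 for 2^size_order pages */
	uint64_t size_bit = 1ULL << (PAGE_SHIFT + size_order - 1);
	/* take the page address */
	uint64_t qw = addr & ~((1ULL << PAGE_SHIFT) - 1);

	if (size_order) {
		/* force the low bits to 1s ... */
		qw |= GENMASK_ULL(PAGE_SHIFT + size_order - 1, PAGE_SHIFT);
		/* ... then clear the size bit so the lowest 0 marks the range */
		qw &= ~size_bit;
	}
	return qw;
}

int main(void)
{
	/*
	 * 0x10000 with size_order = 2: bit 12 -> 1, bit 13 -> 0,
	 * i.e. 4 pages starting at 0x10000; prints 0x11000.
	 */
	printf("%#llx\n", (unsigned long long)encode_addr(0x10000, 2));
	return 0;
}
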

[Jacob Pan]
