Date:   Mon, 6 Jul 2020 16:28:40 -0700
From:   Jacob Pan <jacob.jun.pan@...ux.intel.com>
To:     Auger Eric <eric.auger@...hat.com>
Cc:     iommu@...ts.linux-foundation.org,
        LKML <linux-kernel@...r.kernel.org>,
        Lu Baolu <baolu.lu@...ux.intel.com>,
        Joerg Roedel <joro@...tes.org>,
        David Woodhouse <dwmw2@...radead.org>,
        Yi Liu <yi.l.liu@...el.com>,
        "Tian, Kevin" <kevin.tian@...el.com>,
        Raj Ashok <ashok.raj@...el.com>, jacob.jun.pan@...ux.intel.com
Subject: Re: [PATCH v3 4/7] iommu/vt-d: Handle non-page aligned address

On Thu, 2 Jul 2020 09:50:19 +0200
Auger Eric <eric.auger@...hat.com> wrote:

> Hi Jacob,
> 
> On 7/1/20 5:33 PM, Jacob Pan wrote:
> > From: Liu Yi L <yi.l.liu@...el.com>
> > 
> > Address information for device TLB invalidation comes from userspace
> > when a device is directly assigned to a guest with vIOMMU support.
> > VT-d requires a page aligned address. This patch checks and enforces
> > that the address is page aligned; otherwise reserved bits can be set
> > in the invalidation descriptor, and an unrecoverable fault will be
> > reported due to the non-zero value in the reserved bits.  
> On the other hand, if user space sends unaligned invalidations,
> shouldn't that be reported in some way?
> > 
> > Fixes: 61a06a16e36d8 ("iommu/vt-d: Support flushing more translation cache types")
> > Acked-by: Lu Baolu <baolu.lu@...ux.intel.com>
> > Signed-off-by: Liu Yi L <yi.l.liu@...el.com>
> > Signed-off-by: Jacob Pan <jacob.jun.pan@...ux.intel.com>
> > ---
> >  drivers/iommu/intel/dmar.c | 20 ++++++++++++++++++--
> >  1 file changed, 18 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
> > index d9f973fa1190..3899f3161071 100644
> > --- a/drivers/iommu/intel/dmar.c
> > +++ b/drivers/iommu/intel/dmar.c
> > @@ -1455,9 +1455,25 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
> >  	 * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
> >  	 * ECAP.
> >  	 */
> > -	desc.qw1 |= addr & ~mask;
> > -	if (size_order)
> > +	if (addr & ~VTD_PAGE_MASK)
> > +		pr_warn_ratelimited("Invalidate non-page aligned address %llx\n", addr);
> > +
> > +	/* Take page address */
> > +	desc.qw1 |= QI_DEV_EIOTLB_ADDR(addr);
> > +
> > +	if (size_order) {
> > +		/*
> > +		 * Existing 0s in address below size_order may be the least
> > +		 * significant bit, we must set them to 1s to avoid having
> > +		 * smaller size than desired.  
> Shouldn't you test the input addr against size_order? Aren't they
> supposed to be consistent? Otherwise one should at least emit a
> warning.
Will check that size_order and addr match and emit a warning if not,
combining it with the VTD_PAGE_MASK check above.
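
For illustration only, one possible shape of such a combined check, assuming
the existing VTD_PAGE_SIZE definition and the kernel's IS_ALIGNED() helper;
the exact form will be in the next revision:

	/* Hypothetical combined check: warn if addr is not aligned to the
	 * 2^size_order-page granule implied by size_order.
	 */
	if (!IS_ALIGNED(addr, VTD_PAGE_SIZE << size_order))
		pr_warn_ratelimited("Invalidate non-aligned address %llx, order %d\n",
				    addr, size_order);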

> > +		 */
> > +		desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT,
> > +					VTD_PAGE_SHIFT);  
> nit: instead of working directly on .qw1, couldn't you perform all
> those manipulations directly on addr? and eventually override qw1 at
> the end?
That is good too; I just felt it is more readable this way, making it
clear which fields of qw1 are being manipulated.
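
For illustration, a sketch of that alternative, keeping the bit manipulations
from the hunk above verbatim but applying them to a local variable and writing
qw1 only once at the end (eaddr is a hypothetical name):

	u64 eaddr = QI_DEV_EIOTLB_ADDR(addr);

	if (size_order) {
		/* same size-encoding steps as in the hunk above */
		eaddr |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT,
				     VTD_PAGE_SHIFT);
		eaddr &= ~mask;
		eaddr |= QI_DEV_EIOTLB_SIZE;
	}

	desc.qw1 |= eaddr;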

> > +		/* Clear size_order bit to indicate size */
> > +		desc.qw1 &= ~mask;
> > +		/* Set the S bit to indicate flushing more than 1 page */
> > +		desc.qw1 |= QI_DEV_EIOTLB_SIZE;
> > +	}
> >  
> >  	qi_submit_sync(iommu, &desc, 1, 0);
> >  }
> >   
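For concreteness, a small worked example of the size encoding the comment in
the hunk describes (numbers assumed for illustration, with VTD_PAGE_SHIFT = 12):
to flush 4 pages (16KB) at addr = 0x40000, the hardware expects qw1's
address/size bits to end up as 0x41800, i.e. address bits 63:14 taken from
addr, bit 13 = 0 marking the 16KB range, bit 12 = 1, and the S bit (bit 11)
set to 1.
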
> Thanks
> 
> Eric
> 

[Jacob Pan]
