Message-ID: <MWHPR11MB16451AFC4CF6EAA015D83D0A8CBD0@MWHPR11MB1645.namprd11.prod.outlook.com>
Date:   Fri, 15 May 2020 23:36:01 +0000
From:   "Tian, Kevin" <kevin.tian@...el.com>
To:     "Raj, Ashok" <ashok.raj@...el.com>
CC:     "Liu, Yi L" <yi.l.liu@...el.com>,
        "alex.williamson@...hat.com" <alex.williamson@...hat.com>,
        "eric.auger@...hat.com" <eric.auger@...hat.com>,
        "jacob.jun.pan@...ux.intel.com" <jacob.jun.pan@...ux.intel.com>,
        "joro@...tes.org" <joro@...tes.org>,
        "Tian, Jun J" <jun.j.tian@...el.com>,
        "Sun, Yi Y" <yi.y.sun@...el.com>,
        "jean-philippe@...aro.org" <jean-philippe@...aro.org>,
        "peterx@...hat.com" <peterx@...hat.com>,
        "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
        "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "Wu, Hao" <hao.wu@...el.com>
Subject: RE: (a design open) RE: [PATCH v1 6/8] vfio/type1: Bind guest page
 tables to host

> From: Raj, Ashok <ashok.raj@...el.com>
> Sent: Friday, May 15, 2020 11:20 PM
> 
> On Fri, May 15, 2020 at 12:39:14AM -0700, Tian, Kevin wrote:
> > Hi, Alex,
> >
> > When working on an updated version, Yi and I found an open design
> > question which needs your guidance.
> >
> > In concept, nested translation can be realized as one GPA->HPA page
> > table and multiple GVA->GPA page tables per VM. This means one container
> > is sufficient to include all SVA-capable devices assigned to the same guest,
> > as there is just one IOVA space (GPA->HPA) to be managed by the vfio iommu
> > map/unmap API. GVA->GPA page tables are bound through the new API
> > introduced in this patch. This differs from legacy shadow translation,
> > which merges GIOVA->GPA->HPA into GIOVA->HPA so that each device requires
> > its own IOVA space and must be in a separate container.
> >
> > However, supporting multiple SVA-capable devices in one container
> > imposes a challenge. Currently bind_guest_pgtbl is implemented as an
> > iommu type1 API. By the definition of iommu type1, any operation
> > should be applied to all devices within the same container, just like
> > dma map/unmap. However this philosophy is incorrect when it comes to
> > page table binding. We must follow the guest's binding relationships
> > between its page tables and assigned devices; otherwise every bound
> > address space suddenly becomes accessible by all assigned devices,
> > creating security holes.
> 
> The above 2 paragraphs are a bit confusing :-) but what really matters
> is the below:
> >
> > Do you think it's possible to extend the iommu API to accept
> > device-specific commands? If not, moving it to vfio-pci or vfio-mdev
> > also sounds problematic, as PASID and page tables are IOMMU things
> > which are not touched by vfio device drivers today.
> 
> All you are referring to is that when the admin groups multiple devices
> in a single container, you can't give them isolation for SVA purposes.
> This is logically equivalent to a single group with multiple devices:
> 
> 	- devices behind a p2p bridge
> 	- MFDs, or devices behind switches or hierarchies without ACS
> 	  support, for instance
> 
> By limitation, do you mean disabling PASID on those devices in a single
> container?

the limitation means disabling the nesting capability in such a container,
and yes, it also implies not exposing the PASID capability to userspace.

> 
> what about ATS?

ATS is possibly fine. VFIO exposes it to userspace even today.

Thanks
Kevin

> 
> >
> > Alternatively, can we impose the limitation that the nesting APIs can
> > only be enabled for singleton containers which contain just one device?
> > This basically falls back to the state of legacy shadow-translation
> > vIOMMU, and our current Qemu vIOMMU implementation actually works
> > this way regardless of whether nesting is used. Do you think such a
> > tradeoff is acceptable as a starting point?
> >
> > Thanks
> > Kevin
> >
> > > -----Original Message-----
> > > From: Liu, Yi L <yi.l.liu@...el.com>
> > > Sent: Sunday, March 22, 2020 8:32 PM
> > > To: alex.williamson@...hat.com; eric.auger@...hat.com
> > > Cc: Tian, Kevin <kevin.tian@...el.com>; jacob.jun.pan@...ux.intel.com;
> > > joro@...tes.org; Raj, Ashok <ashok.raj@...el.com>; Liu, Yi L
> > > <yi.l.liu@...el.com>; Tian, Jun J <jun.j.tian@...el.com>; Sun, Yi Y
> > > <yi.y.sun@...el.com>; jean-philippe@...aro.org; peterx@...hat.com;
> > > iommu@...ts.linux-foundation.org; kvm@...r.kernel.org; linux-
> > > kernel@...r.kernel.org; Wu, Hao <hao.wu@...el.com>
> > > Subject: [PATCH v1 6/8] vfio/type1: Bind guest page tables to host
> > >
> > > From: Liu Yi L <yi.l.liu@...el.com>
> > >
> > > VFIO_TYPE1_NESTING_IOMMU is an IOMMU type which is backed by hardware
> > > IOMMUs that have nesting DMA translation (a.k.a. dual-stage address
> > > translation). For such hardware IOMMUs there are two stages/levels of
> > > address translation, and software may let userspace/VM own the
> > > first-level/stage-1 translation structures. An example of such usage
> > > is vSVA (virtual Shared Virtual Addressing): the VM owns the
> > > first-level/stage-1 translation structures and binds them to the
> > > host, and the hardware IOMMU then uses nesting translation when doing
> > > DMA translation for the devices behind such a hardware IOMMU.
> > >
> > > This patch adds vfio support for binding guest translation (a.k.a.
> > > stage-1) structures to the host iommu. For VFIO_TYPE1_NESTING_IOMMU,
> > > not only is binding the guest page table needed; an interface must
> > > also be exposed to the guest for iommu cache invalidation when the
> > > guest modifies the first-level/stage-1 translation structures, since
> > > hardware needs to be notified to flush stale iotlbs. That will be
> > > introduced in the next patch.
> > >
> > > In this patch, guest page table bind and unbind are done using the
> > > flags VFIO_IOMMU_BIND_GUEST_PGTBL and VFIO_IOMMU_UNBIND_GUEST_PGTBL
> > > under the IOCTL VFIO_IOMMU_BIND; the bind/unbind data are conveyed by
> > > struct iommu_gpasid_bind_data. Before binding a guest page table to
> > > the host, the VM should have had a PASID allocated by the host via
> > > VFIO_IOMMU_PASID_REQUEST.
> > >
> > > Binding guest translation structures (here, guest page tables) to the
> > > host is the first step to set up vSVA (Virtual Shared Virtual
> > > Addressing).
> > >
> > > Cc: Kevin Tian <kevin.tian@...el.com>
> > > CC: Jacob Pan <jacob.jun.pan@...ux.intel.com>
> > > Cc: Alex Williamson <alex.williamson@...hat.com>
> > > Cc: Eric Auger <eric.auger@...hat.com>
> > > Cc: Jean-Philippe Brucker <jean-philippe@...aro.org>
> > > Signed-off-by: Jean-Philippe Brucker <jean-philippe@...aro.com>
> > > Signed-off-by: Liu Yi L <yi.l.liu@...el.com>
> > > Signed-off-by: Jacob Pan <jacob.jun.pan@...ux.intel.com>
> > > ---
> > >  drivers/vfio/vfio_iommu_type1.c | 158 ++++++++++++++++++++++++++++++++++++++++
> > >  include/uapi/linux/vfio.h       |  46 ++++++++++++
> > >  2 files changed, 204 insertions(+)
> > >
> > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > > index 82a9e0b..a877747 100644
> > > --- a/drivers/vfio/vfio_iommu_type1.c
> > > +++ b/drivers/vfio/vfio_iommu_type1.c
> > > @@ -130,6 +130,33 @@ struct vfio_regions {
> > >  #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
> > >  	(!list_empty(&iommu->domain_list))
> > >
> > > +struct domain_capsule {
> > > +	struct iommu_domain *domain;
> > > +	void *data;
> > > +};
> > > +
> > > +/* iommu->lock must be held */
> > > +static int vfio_iommu_for_each_dev(struct vfio_iommu *iommu,
> > > +				   int (*fn)(struct device *dev, void *data),
> > > +				   void *data)
> > > +{
> > > +	struct domain_capsule dc = {.data = data};
> > > +	struct vfio_domain *d;
> > > +	struct vfio_group *g;
> > > +	int ret = 0;
> > > +
> > > +	list_for_each_entry(d, &iommu->domain_list, next) {
> > > +		dc.domain = d->domain;
> > > +		list_for_each_entry(g, &d->group_list, next) {
> > > +			ret = iommu_group_for_each_dev(g->iommu_group,
> > > +						       &dc, fn);
> > > +			if (ret)
> > > +				break;
> > > +		}
> > > +	}
> > > +	return ret;
> > > +}
> > > +
> > >  static int put_pfn(unsigned long pfn, int prot);
> > >
> > >  /*
> > > @@ -2314,6 +2341,88 @@ static int vfio_iommu_info_add_nesting_cap(struct vfio_iommu *iommu,
> > >  	return 0;
> > >  }
> > >
> > > +static int vfio_bind_gpasid_fn(struct device *dev, void *data)
> > > +{
> > > +	struct domain_capsule *dc = (struct domain_capsule *)data;
> > > +	struct iommu_gpasid_bind_data *gbind_data =
> > > +		(struct iommu_gpasid_bind_data *)dc->data;
> > > +
> > > +	return iommu_sva_bind_gpasid(dc->domain, dev, gbind_data);
> > > +}
> > > +
> > > +static int vfio_unbind_gpasid_fn(struct device *dev, void *data)
> > > +{
> > > +	struct domain_capsule *dc = (struct domain_capsule *)data;
> > > +	struct iommu_gpasid_bind_data *gbind_data =
> > > +		(struct iommu_gpasid_bind_data *)dc->data;
> > > +
> > > +	return iommu_sva_unbind_gpasid(dc->domain, dev,
> > > +				       gbind_data->hpasid);
> > > +}
> > > +
> > > +/*
> > > + * Unbind a specific gpasid; the caller must hold vfio_iommu->lock.
> > > + */
> > > +static long vfio_iommu_type1_do_guest_unbind(struct vfio_iommu *iommu,
> > > +				struct iommu_gpasid_bind_data *gbind_data)
> > > +{
> > > +	return vfio_iommu_for_each_dev(iommu,
> > > +				       vfio_unbind_gpasid_fn, gbind_data);
> > > +}
> > > +
> > > +static long vfio_iommu_type1_bind_gpasid(struct vfio_iommu *iommu,
> > > +				struct iommu_gpasid_bind_data *gbind_data)
> > > +{
> > > +	int ret = 0;
> > > +
> > > +	mutex_lock(&iommu->lock);
> > > +	if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) {
> > > +		ret = -EINVAL;
> > > +		goto out_unlock;
> > > +	}
> > > +
> > > +	ret = vfio_iommu_for_each_dev(iommu,
> > > +				      vfio_bind_gpasid_fn, gbind_data);
> > > +	/*
> > > +	 * A failed bind may not be a total failure: some devices within
> > > +	 * the iommu group may have bound successfully. Although we don't
> > > +	 * enable the pasid capability for non-singleton iommu groups, an
> > > +	 * unbind operation is helpful to ensure no partial binding is
> > > +	 * left for an iommu group.
> > > +	 */
> > > +	if (ret)
> > > +		/*
> > > +		 * Undo all binds that already succeeded; no need to check
> > > +		 * the return value here, since some devices within the
> > > +		 * group may have had no successful bind by the time we
> > > +		 * reach this point.
> > > +		 */
> > > +		vfio_iommu_type1_do_guest_unbind(iommu, gbind_data);
> > > +
> > > +out_unlock:
> > > +	mutex_unlock(&iommu->lock);
> > > +	return ret;
> > > +}
> > > +
> > > +static long vfio_iommu_type1_unbind_gpasid(struct vfio_iommu *iommu,
> > > +				struct iommu_gpasid_bind_data *gbind_data)
> > > +{
> > > +	int ret = 0;
> > > +
> > > +	mutex_lock(&iommu->lock);
> > > +	if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) {
> > > +		ret = -EINVAL;
> > > +		goto out_unlock;
> > > +	}
> > > +
> > > +	ret = vfio_iommu_type1_do_guest_unbind(iommu, gbind_data);
> > > +
> > > +out_unlock:
> > > +	mutex_unlock(&iommu->lock);
> > > +	return ret;
> > > +}
> > > +
> > >  static long vfio_iommu_type1_ioctl(void *iommu_data,
> > >  				   unsigned int cmd, unsigned long arg)
> > >  {
> > > @@ -2471,6 +2580,55 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> > >  		default:
> > >  			return -EINVAL;
> > >  		}
> > > +
> > > +	} else if (cmd == VFIO_IOMMU_BIND) {
> > > +		struct vfio_iommu_type1_bind bind;
> > > +		u32 version;
> > > +		int data_size;
> > > +		void *gbind_data;
> > > +		int ret;
> > > +
> > > +		minsz = offsetofend(struct vfio_iommu_type1_bind, flags);
> > > +
> > > +		if (copy_from_user(&bind, (void __user *)arg, minsz))
> > > +			return -EFAULT;
> > > +
> > > +		if (bind.argsz < minsz)
> > > +			return -EINVAL;
> > > +
> > > +		/* Get the version of struct iommu_gpasid_bind_data */
> > > +		if (copy_from_user(&version,
> > > +				   (void __user *)(arg + minsz),
> > > +				   sizeof(version)))
> > > +			return -EFAULT;
> > > +
> > > +		data_size = iommu_uapi_get_data_size(
> > > +				IOMMU_UAPI_BIND_GPASID, version);
> > > +		gbind_data = kzalloc(data_size, GFP_KERNEL);
> > > +		if (!gbind_data)
> > > +			return -ENOMEM;
> > > +
> > > +		if (copy_from_user(gbind_data,
> > > +				   (void __user *)(arg + minsz), data_size)) {
> > > +			kfree(gbind_data);
> > > +			return -EFAULT;
> > > +		}
> > > +
> > > +		switch (bind.flags & VFIO_IOMMU_BIND_MASK) {
> > > +		case VFIO_IOMMU_BIND_GUEST_PGTBL:
> > > +			ret = vfio_iommu_type1_bind_gpasid(iommu,
> > > +							   gbind_data);
> > > +			break;
> > > +		case VFIO_IOMMU_UNBIND_GUEST_PGTBL:
> > > +			ret = vfio_iommu_type1_unbind_gpasid(iommu,
> > > +							     gbind_data);
> > > +			break;
> > > +		default:
> > > +			ret = -EINVAL;
> > > +			break;
> > > +		}
> > > +		kfree(gbind_data);
> > > +		return ret;
> > >  	}
> > >
> > >  	return -ENOTTY;
> > > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > > index ebeaf3e..2235bc6 100644
> > > --- a/include/uapi/linux/vfio.h
> > > +++ b/include/uapi/linux/vfio.h
> > > @@ -14,6 +14,7 @@
> > >
> > >  #include <linux/types.h>
> > >  #include <linux/ioctl.h>
> > > +#include <linux/iommu.h>
> > >
> > >  #define VFIO_API_VERSION	0
> > >
> > > @@ -853,6 +854,51 @@ struct vfio_iommu_type1_pasid_request {
> > >   */
> > >  #define VFIO_IOMMU_PASID_REQUEST	_IO(VFIO_TYPE, VFIO_BASE + 22)
> > >
> > > +/**
> > > + * Supported flags:
> > > + *	- VFIO_IOMMU_BIND_GUEST_PGTBL: bind guest page tables to host
> > > + *	  for nesting type IOMMUs. The @data field takes a
> > > + *	  struct iommu_gpasid_bind_data.
> > > + *	- VFIO_IOMMU_UNBIND_GUEST_PGTBL: undo a bind-guest-page-table
> > > + *	  operation invoked by VFIO_IOMMU_BIND_GUEST_PGTBL.
> > > + */
> > > +struct vfio_iommu_type1_bind {
> > > +	__u32	argsz;
> > > +	__u32	flags;
> > > +#define VFIO_IOMMU_BIND_GUEST_PGTBL	(1 << 0)
> > > +#define VFIO_IOMMU_UNBIND_GUEST_PGTBL	(1 << 1)
> > > +	__u8	data[];
> > > +};
> > > +
> > > +#define VFIO_IOMMU_BIND_MASK	(VFIO_IOMMU_BIND_GUEST_PGTBL | \
> > > +				 VFIO_IOMMU_UNBIND_GUEST_PGTBL)
> > > +
> > > +/**
> > > + * VFIO_IOMMU_BIND - _IOW(VFIO_TYPE, VFIO_BASE + 23,
> > > + *			  struct vfio_iommu_type1_bind)
> > > + *
> > > + * Manage the address spaces of devices in this container. Initially a
> > > + * TYPE1 container can only have one address space, managed with
> > > + * VFIO_IOMMU_MAP/UNMAP_DMA.
> > > + *
> > > + * An IOMMU of type VFIO_TYPE1_NESTING_IOMMU can be managed by both
> > > + * MAP/UNMAP and BIND ioctls at the same time. MAP/UNMAP acts on the
> > > + * stage-2 (host) page tables, and BIND manages the stage-1 (guest)
> > > + * page tables. Other types of IOMMU may allow MAP/UNMAP and BIND to
> > > + * coexist, where MAP/UNMAP controls the traffic that only requires
> > > + * single-stage translation while BIND controls the traffic that
> > > + * requires nesting translation. But this depends on the underlying
> > > + * IOMMU architecture and isn't guaranteed. An example of this is
> > > + * guest SVA traffic, which needs nesting translation to gain the
> > > + * gVA->gPA and then the gPA->hPA translation.
> > > + *
> > > + * Availability of this feature depends on the device, its bus, the
> > > + * underlying IOMMU and the CPU architecture.
> > > + *
> > > + * returns: 0 on success, -errno on failure.
> > > + */
> > > +#define VFIO_IOMMU_BIND		_IO(VFIO_TYPE, VFIO_BASE + 23)
> > > +
> > >  /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
> > >
> > >  /*
> > > --
> > > 2.7.4
> >
