Date:   Wed, 13 Dec 2023 11:54:15 -0800
From:   Nicolin Chen <nicolinc@...dia.com>
To:     Jason Gunthorpe <jgg@...dia.com>
CC:     Yi Liu <yi.l.liu@...el.com>,
        "Giani, Dhaval" <Dhaval.Giani@....com>,
        Vasant Hegde <vasant.hegde@....com>,
        Suravee Suthikulpanit <suravee.suthikulpanit@....com>,
        <joro@...tes.org>, <alex.williamson@...hat.com>,
        <kevin.tian@...el.com>, <robin.murphy@....com>,
        <baolu.lu@...ux.intel.com>, <cohuck@...hat.com>,
        <eric.auger@...hat.com>, <kvm@...r.kernel.org>,
        <mjrosato@...ux.ibm.com>, <chao.p.peng@...ux.intel.com>,
        <yi.y.sun@...ux.intel.com>, <peterx@...hat.com>,
        <jasowang@...hat.com>, <shameerali.kolothum.thodi@...wei.com>,
        <lulu@...hat.com>, <iommu@...ts.linux.dev>,
        <linux-kernel@...r.kernel.org>, <linux-kselftest@...r.kernel.org>,
        <zhenzhong.duan@...el.com>, <joao.m.martins@...cle.com>,
        <xin.zeng@...el.com>, <yan.y.zhao@...el.com>
Subject: Re: [PATCH v6 0/6] iommufd: Add nesting infrastructure (part 2/2)

On Wed, Dec 13, 2023 at 08:40:55AM -0400, Jason Gunthorpe wrote:
> On Tue, Dec 12, 2023 at 12:05:41PM -0800, Nicolin Chen wrote:
> > > > // iommufd_private.h
> > > > 
> > > > enum iommufd_object_type {
> > > > 	...
> > > > +	IOMMUFD_OBJ_VIOMMU,
> > > > 	...
> > > > };
> > > > 
> > > > +struct iommufd_viommu {
> > > > +	struct iommufd_object obj;
> > > > +	struct iommufd_hwpt_paging *hwpt;
> > > > +	struct xarray devices;
> > > > +};
> > > > 
> > > > struct iommufd_hwpt_paging {
> > > > 	...
> > > > +	struct list_head viommu_list;
> > > > 	...
> > > > };
> > > 
> > > I'd probably first try to go backwards and link the hwpt to the
> > > viommu.
> > 
> > I think a VM should have only one hwpt_paging object but one or
> > more viommu objects, so we could only have viommu->hwpt_paging
> > and hwpt_paging->viommu_list. How would we go backwards from there?
> 
> That is probably how things would work, but I don't know if it
> makes sense to enforce it in the kernel logic.
> 
> Point the S2 to a list of viommu objects it is linked to

Hmm, isn't that hwpt_paging->viommu_list already?
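
To make sure we are looking at the same thing, a quick sketch
extending the earlier one (hwpt_item and the helper below are
made-up names, not from any posted series):

// iommufd_private.h

struct iommufd_viommu {
	struct iommufd_object obj;
	struct iommufd_hwpt_paging *hwpt;
	struct list_head hwpt_item;	/* entry in hwpt->viommu_list */
	struct xarray devices;
};

// Set up both directions when the viommu is allocated, under
// whatever lock ends up protecting hwpt->viommu_list:
static void iommufd_viommu_link_hwpt(struct iommufd_viommu *viommu,
				     struct iommufd_hwpt_paging *hwpt)
{
	viommu->hwpt = hwpt;
	list_add_tail(&viommu->hwpt_item, &hwpt->viommu_list);
}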

> > > In the second version maybe we have the xarray, or maybe we just
> > > push the xarray to the eventual viommu series.
> > 
> > I think I still don't get the purpose of the xarray here. It was
> > needed previously because a cache invalidation per hwpt didn't
> > know which device it was for. Now IOMMUFD_DEV_INVALIDATE knows.
> > 
> > Maybe it's related to that narrative "logically we could have
> > multiple mappings per iommufd" that you mentioned above. Mind
> > elaborating a bit?
> > 
> > In my mind, a viommu is allocated by the VMM per piommu, by
> > detecting the piommu_id via hw_info. In that case, a viommu can
> > only have one unique device list. If IOMMUFD_DEV_INVALIDATE
> > passes in the dev_id, we don't really need a vRID-to-pRID
> > mapping in a multi-viommu case either, do we? In other words,
> > the VMM already has a mapping from vRID to dev_id, so it could
> > call the DEV_INVALIDATE ioctl directly in the first place?
> 
> The xarray exists to optimize the invalidation flow.
> 
> For SW you can imagine issuing an invalidation against the viommu
> itself, and *all* commands, be they ASID or ATC invalidations, can
> be processed in one shot. The xarray allows converting the vSID to
> pSID to process ATC invalidations, and the viommu object forces a
> single VMID to handle the ATC invalidations. Whether we want to do
> this, I don't know.

I drafted some patches with IOMMUFD_DEV_INVALIDATE yesterday, and
ran into the same problem that you point out here: how the VMM
should handle a batch of commands that interleaves ASID and ATC
commands. If the VMM dispatches the commands into two groups, they
will execute in a different order than the guest kernel issued them
in. This gets messy if an error occurs in the middle of the
execution: some later invalidations would have already succeeded,
yet the CONS index would have to stop at an earlier command.

And even if a guest queue holds only ATC invalidations, there's no
guarantee that all the commands are for the same dev_id, i.e. the
ATC invalidations themselves would have to be dispatched into
multiple groups and separate IOMMUFD_DEV_INVALIDATE calls.
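
To illustrate, the dispatch I was prototyping splits the guest batch
roughly like this (pseudocode; all the helper names are made up):

/*
 * Split the guest batch by command type, then by dev_id. The batch
 * boundaries reorder execution vs. the guest queue, and a failure
 * in batch N can't be reported via a single CONS index once batches
 * 1..N-1 have already completed.
 */
split_guest_batch(guest_cmds, &asid_cmds, &atc_cmds);
hwpt_invalidate(nested_hwpt_id, asid_cmds);
for_each_dev_id(dev_id, atc_cmds)
	dev_invalidate(dev_id, atc_cmds_for(dev_id));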

With the xarray, perhaps we could instead provide a viommu_id in the
data structure of the current iommu_hwpt_invalidate, i.e. reshape
the existing invalidate uAPI to be per viommu, so that it can be
reused for ATC invalidations too instead of adding
IOMMUFD_DEV_INVALIDATE? Then we wouldn't have the out-of-order
execution problem above.
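
I.e. keeping roughly the layout of the current invalidate proposal,
but replacing the hwpt_id (a sketch only; exact fields TBD):

struct iommu_hwpt_invalidate {
	__u32 size;
	__u32 viommu_id;	/* instead of hwpt_id */
	__aligned_u64 data_uptr;	/* mixed ASID+ATC commands OK */
	__u32 data_type;
	__u32 entry_len;
	__u32 entry_num;
	__u32 __reserved;
};

The kernel could then keep the guest's command order within a single
call, looking up each vSID in viommu->devices as it walks the array.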

> For HW, the xarray holds the vSID to pSID mapping that must be
> programmed into the HW operating the dedicated invalidation queue.

Ah, right! VCMDQ at least needs that.
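
On the SW path that would be a per-command lookup like below, and
for VCMDQ we'd program the same mapping into the HW queue. A rough
sketch (idev_to_psid() is a made-up helper):

static u32 viommu_vsid_to_psid(struct iommufd_viommu *viommu, u32 vsid)
{
	struct iommufd_device *idev;

	idev = xa_load(&viommu->devices, vsid);
	if (!idev)
		return 0;	/* or report the bad vSID to the guest */
	return idev_to_psid(idev);
}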

Thanks
Nicolin
