Message-ID: <DS0PR11MB7529C310FAEA61B6E7988629C312A@DS0PR11MB7529.namprd11.prod.outlook.com>
Date:   Wed, 9 Aug 2023 09:30:12 +0000
From:   "Liu, Yi L" <yi.l.liu@...el.com>
To:     "Tian, Kevin" <kevin.tian@...el.com>,
        Nicolin Chen <nicolinc@...dia.com>,
        Jason Gunthorpe <jgg@...dia.com>
CC:     "joro@...tes.org" <joro@...tes.org>,
        "alex.williamson@...hat.com" <alex.williamson@...hat.com>,
        "robin.murphy@....com" <robin.murphy@....com>,
        "baolu.lu@...ux.intel.com" <baolu.lu@...ux.intel.com>,
        "cohuck@...hat.com" <cohuck@...hat.com>,
        "eric.auger@...hat.com" <eric.auger@...hat.com>,
        "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        "mjrosato@...ux.ibm.com" <mjrosato@...ux.ibm.com>,
        "chao.p.peng@...ux.intel.com" <chao.p.peng@...ux.intel.com>,
        "yi.y.sun@...ux.intel.com" <yi.y.sun@...ux.intel.com>,
        "peterx@...hat.com" <peterx@...hat.com>,
        "jasowang@...hat.com" <jasowang@...hat.com>,
        "shameerali.kolothum.thodi@...wei.com" 
        <shameerali.kolothum.thodi@...wei.com>,
        "lulu@...hat.com" <lulu@...hat.com>,
        "suravee.suthikulpanit@....com" <suravee.suthikulpanit@....com>,
        "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
        "Duan, Zhenzhong" <zhenzhong.duan@...el.com>
Subject: RE: [PATCH v4 09/12] iommu/vt-d: Add iotlb flush for nested domain

> From: Tian, Kevin <kevin.tian@...el.com>
> Sent: Wednesday, August 9, 2023 4:58 PM
> 
> > From: Liu, Yi L <yi.l.liu@...el.com>
> > Sent: Wednesday, August 9, 2023 4:50 PM
> >
> > > From: Tian, Kevin <kevin.tian@...el.com>
> > > Sent: Wednesday, August 9, 2023 4:23 PM
> > >
> > > > From: Nicolin Chen <nicolinc@...dia.com>
> > > > Sent: Wednesday, August 9, 2023 1:42 AM
> > > >
> > > > On Tue, Aug 08, 2023 at 09:34:03AM -0300, Jason Gunthorpe wrote:
> > > > > On Mon, Aug 07, 2023 at 08:12:37PM -0700, Nicolin Chen wrote:
> > > > > > On Mon, Aug 07, 2023 at 03:08:29PM +0000, Liu, Yi L wrote:
> > > > > > > > > From: Liu, Yi L <yi.l.liu@...el.com>
> > > > > > > > > Sent: Monday, July 24, 2023 7:14 PM
> > > > > > > > >
> > > > > > > > > +static int intel_nested_cache_invalidate_user(struct iommu_domain *domain,
> > > > > > > > > +					      void *user_data)
> > > > > > > > > +{
> > > > > > > > > +	struct iommu_hwpt_vtd_s1_invalidate_desc *req = user_data;
> > > > > > > > > +	struct iommu_hwpt_vtd_s1_invalidate *inv_info = user_data;
> > > > > > > > > +	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
> > > > > > > > > +	unsigned int entry_size = inv_info->entry_size;
> > > > > > > > > +	u64 uptr = inv_info->inv_data_uptr;
> > > > > > > > > +	u64 nr_uptr = inv_info->entry_nr_uptr;
> > > > > > > > > +	struct device_domain_info *info;
> > > > > > > > > +	u32 entry_nr, index;
> > > > > > > > > +	unsigned long flags;
> > > > > > > > > +	int ret = 0;
> > > > > > > > > +
> > > > > > > > > +	if (get_user(entry_nr, (uint32_t __user *)u64_to_user_ptr(nr_uptr)))
> > > > > > > > > +		return -EFAULT;
> > > > > > > > > +
> > > > > > > > > +	for (index = 0; index < entry_nr; index++) {
> > > > > > > > > +		ret = copy_struct_from_user(req, sizeof(*req),
> > > > > > > > > +					    u64_to_user_ptr(uptr + index * entry_size),
> > > > > > > > > +					    entry_size);
> > > > > > > >
> > > > > > > > If continuing this direction then the driver should also check
> > > > > > > > minsz etc. for struct iommu_hwpt_vtd_s1_invalidate and
> > > > > > > > iommu_hwpt_vtd_s1_invalidate_desc since they are uAPI and
> > > > > > > > subject to change.
> > > > > > >
> > > > > > > Then needs to define size in the uapi data structure, and copy size
> > first
> > > > and
> > > > > > > check minsz before going forward. How about the structures for
> > hwpt
> > > > alloc
> > > > > > > like struct iommu_hwpt_vtd_s1? Should check minsz for them as
> > well?
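
To illustrate, a minimal sketch of such a size field plus minsz check,
mirroring the style of other iommufd uAPI structures (all field names
besides size are illustrative here, not the final uAPI):

struct iommu_hwpt_vtd_s1 {
	__u32 size;	/* userspace sets this to sizeof() the struct it built */
	__u32 flags;
	__aligned_u64 pgtbl_addr;
	__u32 addr_width;
	__u32 __reserved;
};

	/* minsz covers the last field old userspace must have provided */
	if (user_size < offsetofend(struct iommu_hwpt_vtd_s1, addr_width))
		return -EINVAL;
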
> > > > > >
> > > > > > Assuming that every uAPI data structure needs a min_size, we can
> > > > > > either add a structure holding all min_sizes, like iommufd's
> > > > > > main.c does, or have another xx_min_len in
> > > > > > iommu_ops/iommu_domain_ops.
> > > > >
> > > > > If the driver is doing the copy it is OK for the driver to do the
> > > > > min_size check too.
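
To make that concrete, here is a sketch of such a driver-side check
applied to the loop quoted above, assuming for illustration that addr
is the last mandatory field of iommu_hwpt_vtd_s1_invalidate_desc:

	if (entry_size < offsetofend(struct iommu_hwpt_vtd_s1_invalidate_desc, addr))
		return -EINVAL;

	for (index = 0; index < entry_nr; index++) {
		ret = copy_struct_from_user(req, sizeof(*req),
					    u64_to_user_ptr(uptr + index * entry_size),
					    entry_size);
		if (ret)
			break;
		/* ... handle one invalidation descriptor ... */
	}
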
> > > >
> > > > Ah, just realized my reply above was missing some context...
> > > >
> > > > Yi and I have a concern that the core iommu_hwpt_alloc()
> > > > and iommu_hwpt_cache_invalidate(), in the nesting series, copy
> > > > data without checking the min_sizes. So, we are trying to add
> > > > the missing piece into the next version but are not sure which
> > > > way would be optimal.
> > > >
> > > > It probably makes sense to add a cache_invalidate_user_min_len
> > > > next to the existing cache_invalidate_user_data_len. For
> > > > iommu_hwpt_alloc, we are missing a data_len, as the core just
> > > > uses sizeof(union iommu_domain_user_data) when calling
> > > > copy_struct_from_user().
> > > >
> > > > Perhaps we could add two pairs of data_len/min_len in the ops
> > > > structs:
> > > > 	// iommu_ops
> > > > 	const size_t domain_alloc_user_data_len; // for sanity&copy
> > > > 	const size_t domain_alloc_user_min_len; // for sanity only
> > > > 	// iommu_domain_ops
> > > > 	const size_t cache_invalidate_user_data_len; // for sanity&copy
> > > > 	const size_t cache_invalidate_user_min_len; // for sanity only
> > > >
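
For illustration, the core side of that proposal could look roughly
like the below (a hypothetical sketch, not code from the series):

	/* min_len is for sanity only: reject too-short user buffers */
	if (user_data_len < ops->domain_alloc_user_min_len)
		return ERR_PTR(-EINVAL);

	/*
	 * data_len drives the copy; copy_struct_from_user() zero-fills
	 * the kernel-side tail and rejects non-zero trailing user bytes.
	 */
	ret = copy_struct_from_user(data, ops->domain_alloc_user_data_len,
				    u64_to_user_ptr(user_data_uptr),
				    user_data_len);
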
> > >
> > > What about creating a simple array to track the type-specific len in
> > > iommufd instead of adding more fields to iommu_ops/domain_ops?
> > > Anyway, it's iommufd doing the copy, and all the type-specific
> > > structures are already defined in the uapi header.
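
i.e. something like the below inside iommufd (a sketch; the array name
is made up, and the type constant follows this series' naming):

static const size_t iommufd_hwpt_invalidate_len[] = {
	[IOMMU_HWPT_TYPE_VTD_S1] = sizeof(struct iommu_hwpt_vtd_s1_invalidate),
};
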
> >
> > Then index the array with the type value, is it? Seems like we had
> > defined such an array before for the lengths of the hwpt_alloc and
> > invalidate structures, but finally we dropped it since the array may
> > grow large as new types are added.
> 
> I'm not sure how many types iommufd will support in reality.
>
> Just my personal feeling that having the information contained in
> iommufd is cleaner than expanding the iommu core abstraction to assist
> iommufd's user buffer copy/verification.
> 
> >
> > >
> > > and a similar example already exists in union ucmd_buffer, which
> > > includes the type-specific structures to avoid extra memory copies...
> >
> > I don't quite get this. ucmd_buffer is a union used to copy any user
> > data. But here we want to check the minsz of the user data. Seems
> > not related.
> >
> 
> That's the example of recording vendor-specific structure information
> in iommufd, and it will also grow along with newly added types.
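
For reference, that union in drivers/iommu/iommufd/main.c looks roughly
like the below (abridged):

union ucmd_buffer {
	struct iommu_destroy destroy;
	struct iommu_ioas_alloc alloc;
	struct iommu_ioas_map map;
	struct iommu_ioas_unmap unmap;
	/* ... one member per ioctl; the union is sized by the largest ... */
};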

Yeah, adding new structures to ucmd_buffer may increase its size as
well if the new one is larger, while for an array every new entry is
sure to increase the size. I also remember there was one tricky thing
when handling the selftest type: it is defined as 0xbadbeef, so using
it to index an array does not work (the array would be enormous), and
we needed some special handling for it. If the lengths are defined in
iommu_ops, it is simpler. Selftest may not be so critical to
determining the direction, though.
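
A sketch of the kind of special-casing that requires (the selftest
struct and the array name are hypothetical):

	switch (hwpt_type) {
#ifdef CONFIG_IOMMUFD_TEST
	case IOMMU_HWPT_TYPE_SELFTEST:	/* 0xbadbeef: cannot be an array index */
		data_len = sizeof(struct iommu_hwpt_selftest);
		break;
#endif
	default:
		if (hwpt_type >= ARRAY_SIZE(iommufd_hwpt_alloc_len))
			return -EINVAL;
		data_len = iommufd_hwpt_alloc_len[hwpt_type];
		break;
	}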

Regards,
Yi Liu
