Message-ID: <BN9PR11MB5276C84452E56AB03B819D208C05A@BN9PR11MB5276.namprd11.prod.outlook.com>
Date: Mon, 31 Jul 2023 06:31:20 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: Jason Gunthorpe <jgg@...dia.com>, "Liu, Yi L" <yi.l.liu@...el.com>
CC: "joro@...tes.org" <joro@...tes.org>,
"alex.williamson@...hat.com" <alex.williamson@...hat.com>,
"robin.murphy@....com" <robin.murphy@....com>,
"baolu.lu@...ux.intel.com" <baolu.lu@...ux.intel.com>,
"cohuck@...hat.com" <cohuck@...hat.com>,
"eric.auger@...hat.com" <eric.auger@...hat.com>,
"nicolinc@...dia.com" <nicolinc@...dia.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"mjrosato@...ux.ibm.com" <mjrosato@...ux.ibm.com>,
"chao.p.peng@...ux.intel.com" <chao.p.peng@...ux.intel.com>,
"yi.y.sun@...ux.intel.com" <yi.y.sun@...ux.intel.com>,
"peterx@...hat.com" <peterx@...hat.com>,
"jasowang@...hat.com" <jasowang@...hat.com>,
"shameerali.kolothum.thodi@...wei.com"
<shameerali.kolothum.thodi@...wei.com>,
"lulu@...hat.com" <lulu@...hat.com>,
"suravee.suthikulpanit@....com" <suravee.suthikulpanit@....com>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
"Duan, Zhenzhong" <zhenzhong.duan@...el.com>
Subject: RE: [PATCH v3 08/17] iommufd: IOMMU_HWPT_ALLOC allocation with user data

> From: Jason Gunthorpe <jgg@...dia.com>
> Sent: Saturday, July 29, 2023 1:56 AM
>
> On Mon, Jul 24, 2023 at 04:03:57AM -0700, Yi Liu wrote:
>
> > + switch (pt_obj->type) {
> > + case IOMMUFD_OBJ_IOAS:
> > + ioas = container_of(pt_obj, struct iommufd_ioas, obj);
> > + break;
> > + case IOMMUFD_OBJ_HW_PAGETABLE:
> > + /* pt_id points to a HWPT only when hwpt_type is !IOMMU_HWPT_TYPE_DEFAULT */
> > + if (cmd->hwpt_type == IOMMU_HWPT_TYPE_DEFAULT) {
> > + rc = -EINVAL;
> > + goto out_put_pt;
> > + }
> > +
> > + parent = container_of(pt_obj, struct iommufd_hw_pagetable, obj);
> > + /*
> > + * Cannot allocate user-managed hwpt linking to auto_created
> > + * hwpt. If the parent hwpt is already a user-managed hwpt,
> > + * don't allocate another user-managed hwpt linking to it.
> > + */
> > + if (parent->auto_domain || parent->parent) {
> > + rc = -EINVAL;
> > + goto out_put_pt;
> > + }
> > + ioas = parent->ioas;
>
> Why do we set ioas here? I would think it should be NULL.
>
> I think it is looking like a mistake to try and re-use
> iommufd_hw_pagetable_alloc() directly for the nested case. It should
> not have an IOAS argument, and it should not do enforce_cc or call the
> iopt_* functions.
enforce_cc is still required since it's about memory accesses after the
page table walk completes (regardless of whether the walked table is
single-stage or nested).
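
e.g. in concept something like below still applies to a nested hwpt
(just a sketch from memory; the helper name and the exact field layout
are approximations, not the patch):

static int hwpt_enforce_cc(struct iommufd_hw_pagetable *hwpt)
{
	/*
	 * enforce_cc acts on the iommu_domain the device actually
	 * attaches to, so it is needed whether that domain is
	 * single-stage or the stage-1 of a nest.
	 */
	if (hwpt->enforce_cache_coherency)
		return 0;
	if (hwpt->domain->ops->enforce_cache_coherency)
		hwpt->enforce_cache_coherency =
			hwpt->domain->ops->enforce_cache_coherency(
				hwpt->domain);
	return hwpt->enforce_cache_coherency ? 0 : -EINVAL;
}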
>
> So most of the function is removed. Probably better to make a new
> ioas-less function for the nesting case.
>
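To illustrate, an ioas-less path for the nested case could look roughly
like below. All names here (iommufd_hw_pagetable_alloc_nested, the
hwpt->parent field, the iommu_domain_alloc_user() call) are only a
sketch of the idea, not the actual patch:

struct iommufd_hw_pagetable *
iommufd_hw_pagetable_alloc_nested(struct iommufd_ctx *ictx,
				  struct iommufd_hw_pagetable *parent,
				  struct iommufd_device *idev,
				  const void *user_data)
{
	struct iommufd_hw_pagetable *hwpt;
	int rc;

	hwpt = iommufd_object_alloc(ictx, hwpt, IOMMUFD_OBJ_HW_PAGETABLE);
	if (IS_ERR(hwpt))
		return hwpt;

	/* no hwpt->ioas, no enforce_cc, no iopt_* calls on this path */
	hwpt->parent = parent;
	refcount_inc(&parent->obj.users);

	hwpt->domain = iommu_domain_alloc_user(idev->dev, user_data,
					       parent->domain);
	if (IS_ERR(hwpt->domain)) {
		rc = PTR_ERR(hwpt->domain);
		hwpt->domain = NULL;
		goto out_abort;
	}
	return hwpt;

out_abort:
	refcount_dec(&parent->obj.users);
	iommufd_object_abort(ictx, &hwpt->obj);
	return ERR_PTR(rc);
}
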
> > diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c
> > index 510db114fc61..5f4420626421 100644
> > --- a/drivers/iommu/iommufd/main.c
> > +++ b/drivers/iommu/iommufd/main.c
> > @@ -426,7 +426,7 @@ static const struct iommufd_ioctl_op iommufd_ioctl_ops[] = {
> >  	IOCTL_OP(IOMMU_GET_HW_INFO, iommufd_get_hw_info, struct iommu_hw_info,
> >  		 __reserved),
> >  	IOCTL_OP(IOMMU_HWPT_ALLOC, iommufd_hwpt_alloc, struct iommu_hwpt_alloc,
> > -		 __reserved),
> > +		 data_uptr),
>
> Nono, these can never change once we put them in. It specifies the
> hard minimum size that userspace must provide. If userspace gives less
> than this then the ioctl always fails. Changing it breaks all
> existing software.
>
> The core code ensures that the trailing part of the cmd struct is
> zero'd; the extended implementation must cope with zero'd values,
> which this does.
>
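For reference, the mechanism in simplified form (paraphrased from
main.c, not the exact code):

struct iommufd_ioctl_op {
	unsigned int size;	/* current full size of the uAPI struct */
	unsigned int min_size;	/* offsetofend() of the field named in IOCTL_OP */
	int (*execute)(struct iommufd_ucmd *ucmd);
};

static long iommufd_dispatch(const struct iommufd_ioctl_op *op,
			     struct iommufd_ucmd *ucmd)
{
	int rc;

	/* userspace must provide at least min_size, forever */
	if (ucmd->user_size < op->min_size)
		return -EINVAL;
	/* copies user_size bytes, zero-fills the rest up to op->size */
	rc = copy_struct_from_user(ucmd->cmd, op->size, ucmd->ubuf,
				   ucmd->user_size);
	if (rc)
		return rc;
	return op->execute(ucmd);
}
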
Ideally expanding a uAPI structure should come with new flag bits, but
this one might be a bit special. The hwpt_alloc() series was just queued
to iommufd-next. If the nesting series can land in the same cycle then
changing it in the current way is probably fine since there is no
existing software. Otherwise we need to follow the common practice. 😊
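
i.e. the common practice would be something like below (the flag name
is made up for illustration):

/*
 * Keep min_size at the already-published value and gate the new
 * trailing field behind a new flag: old kernels reject the unknown
 * flag, new kernels only read data_uptr when the flag is set.
 */
#define IOMMU_HWPT_ALLOC_USER_DATA	(1 << 0)	/* hypothetical */

static int check_hwpt_alloc_flags(struct iommu_hwpt_alloc *cmd)
{
	if (cmd->flags & ~IOMMU_HWPT_ALLOC_USER_DATA)
		return -EOPNOTSUPP;	/* unknown future flags */
	if (!(cmd->flags & IOMMU_HWPT_ALLOC_USER_DATA) &&
	    cmd->hwpt_type != IOMMU_HWPT_TYPE_DEFAULT)
		return -EINVAL;	/* user data required for non-default types */
	return 0;
}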