Message-ID: <BN9PR11MB52768197516FB895146A12078CB82@BN9PR11MB5276.namprd11.prod.outlook.com>
Date: Mon, 21 Apr 2025 08:37:40 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: Nicolin Chen <nicolinc@...dia.com>, "jgg@...dia.com" <jgg@...dia.com>,
"corbet@....net" <corbet@....net>, "will@...nel.org" <will@...nel.org>
CC: "robin.murphy@....com" <robin.murphy@....com>, "joro@...tes.org"
<joro@...tes.org>, "thierry.reding@...il.com" <thierry.reding@...il.com>,
"vdumpa@...dia.com" <vdumpa@...dia.com>, "jonathanh@...dia.com"
<jonathanh@...dia.com>, "shuah@...nel.org" <shuah@...nel.org>,
"praan@...gle.com" <praan@...gle.com>, "nathan@...nel.org"
<nathan@...nel.org>, "peterz@...radead.org" <peterz@...radead.org>, "Liu, Yi
L" <yi.l.liu@...el.com>, "jsnitsel@...hat.com" <jsnitsel@...hat.com>,
"mshavit@...gle.com" <mshavit@...gle.com>, "zhangzekun11@...wei.com"
<zhangzekun11@...wei.com>, "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>, "linux-tegra@...r.kernel.org"
<linux-tegra@...r.kernel.org>, "linux-kselftest@...r.kernel.org"
<linux-kselftest@...r.kernel.org>, "patches@...ts.linux.dev"
<patches@...ts.linux.dev>
Subject: RE: [PATCH v1 15/16] iommu/tegra241-cmdqv: Add user-space use support
> From: Nicolin Chen <nicolinc@...dia.com>
> Sent: Friday, April 11, 2025 2:38 PM
>
> Add the support via vIOMMU infrastructure for virtualization use case.
>
> This basically allows VMM to allocate VINTFs (as a vIOMMU object) and
> assign VCMDQs (vCMDQ objects) to it. A VINTF's MMIO page0 can be
> mmap'd to user space for VM to access directly without VMEXIT and
> corresponding hypercall.
it'd be helpful to add a bit more context, e.g. what page0 contains (sid
slots, vcmdq_base mapped in s2, etc.), so it's easier for one to understand
it from the start instead of having to read the code.
>
> As an initial version, the number of VCMDQs per VINTF is fixed to two.
so a user could map both VCMDQs of a VINTF even when only one
VCMDQ is created, given the entire 64K page0 is eligible for mmap once
the VINTF is associated with a viommu?
there is no security issue given the VINTF is not shared, but conceptually,
if feasible (e.g. the two VCMDQs' MMIO ranges sit in different 4K pages
of the VINTF page0), does it make sense to do per-VCMDQ mmap control
and return the mmap info at VCMDQ alloc?
> +static struct iommufd_vcmdq *
> +tegra241_cmdqv_vcmdq_alloc(struct iommufd_viommu *viommu,
> + const struct iommu_user_data *user_data)
> +{
> + struct tegra241_vintf *vintf = viommu_to_vintf(viommu);
> + struct tegra241_cmdqv *cmdqv = vintf->cmdqv;
> + struct iommu_vcmdq_tegra241_cmdqv arg;
> + struct tegra241_vcmdq *vcmdq;
> + phys_addr_t q_base;
> + char header[64];
> + int ret;
> +
> + ret = iommu_copy_struct_from_user(
> + &arg, user_data, IOMMU_VCMDQ_TYPE_TEGRA241_CMDQV, vcmdq_base);
> + if (ret)
> + return ERR_PTR(ret);
> +
> + if (!arg.vcmdq_base || arg.vcmdq_base & ~VCMDQ_ADDR)
> + return ERR_PTR(-EINVAL);
> + if (!arg.vcmdq_log2size || arg.vcmdq_log2size > VCMDQ_LOG2SIZE)
> + return ERR_PTR(-EINVAL);
> + if (arg.vcmdq_id >= cmdqv->num_lvcmdqs_per_vintf)
> + return ERR_PTR(-EINVAL);
> + q_base = arm_smmu_domain_ipa_to_pa(vintf->s2_domain, arg.vcmdq_base);
> + if (!q_base)
> + return ERR_PTR(-EINVAL);
> +
> + if (vintf->lvcmdqs[arg.vcmdq_id]) {
> + vcmdq = vintf->lvcmdqs[arg.vcmdq_id];
> +
> + /* deinit the previous setting as a reset, before re-init */
> + tegra241_vcmdq_hw_deinit(vcmdq);
> +
> + vcmdq->cmdq.q.q_base = q_base & VCMDQ_ADDR;
> + vcmdq->cmdq.q.q_base |= arg.vcmdq_log2size;
> + tegra241_vcmdq_hw_init_user(vcmdq);
> +
> + return &vcmdq->core;
> + }
why not return -EBUSY here?
> +
> + vcmdq = iommufd_vcmdq_alloc(viommu, struct tegra241_vcmdq, core);
> + if (!vcmdq)
> + return ERR_PTR(-ENOMEM);
> +
> + ret = tegra241_vintf_init_lvcmdq(vintf, arg.vcmdq_id, vcmdq);
> + if (ret)
> + goto free_vcmdq;
> + dev_dbg(cmdqv->dev, "%sallocated\n",
> + lvcmdq_error_header(vcmdq, header, 64));
> +
> + vcmdq->cmdq.q.q_base = q_base & VCMDQ_ADDR;
> + vcmdq->cmdq.q.q_base |= arg.vcmdq_log2size;
could the queue span multiple pages? there is no guarantee
that the HPA of the guest queue would be contiguous :/