Message-ID: <BY5PR12MB3764F88D41104D35BB11FDEBB3F49@BY5PR12MB3764.namprd12.prod.outlook.com>
Date: Thu, 21 Apr 2022 16:34:29 +0000
From: Krishna Reddy <vdumpa@...dia.com>
To: Ashish Mhetre <amhetre@...dia.com>,
"thierry.reding@...il.com" <thierry.reding@...il.com>,
"will@...nel.org" <will@...nel.org>,
"robin.murphy@....com" <robin.murphy@....com>,
"joro@...tes.org" <joro@...tes.org>,
Jonathan Hunter <jonathanh@...dia.com>,
"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: Sachin Nikam <Snikam@...dia.com>,
Nicolin Chen <nicolinc@...dia.com>,
Pritesh Raithatha <praithatha@...dia.com>
Subject: RE: [Patch v2] iommu: arm-smmu: disable large page mappings for
Nvidia arm-smmu
> Tegra194 and Tegra234 SoCs have an erratum that causes walk cache entries
> to not be invalidated correctly. The problem is that the walk cache index
> generated for an IOVA is not the same across translation and invalidation
> requests. This leads to page faults when a PMD entry is released during
> unmap and populated with a new PTE table during a subsequent map request.
> Disabling large page mappings avoids the release of the PMD entry and thus
> avoids translations seeing a stale PMD entry in the walk cache.
> Fix this by limiting page mappings to PAGE_SIZE for Tegra194 and Tegra234
> devices. This is the recommended fix from the Tegra hardware design team.
>
> Co-developed-by: Pritesh Raithatha <praithatha@...dia.com>
> Signed-off-by: Pritesh Raithatha <praithatha@...dia.com>
> Signed-off-by: Ashish Mhetre <amhetre@...dia.com>
> ---
> Changes in v2:
> - Using init_context() to override pgsize_bitmap instead of new function
>
>  drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c | 30 ++++++++++++++++++++
>  1 file changed, 30 insertions(+)
>
> diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c b/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
> index 01e9b50b10a1..87bf522b9d2e 100644
> --- a/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
> +++ b/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
> @@ -258,6 +258,34 @@ static void nvidia_smmu_probe_finalize(struct arm_smmu_device *smmu, struct devi
>  			dev_name(dev), err);
>  }
>
> +static int nvidia_smmu_init_context(struct arm_smmu_domain *smmu_domain,
> +				    struct io_pgtable_cfg *pgtbl_cfg,
> +				    struct device *dev)
> +{
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	const struct device_node *np = smmu->dev->of_node;
> +
> +	/*
> +	 * Tegra194 and Tegra234 SoCs have an erratum that causes walk cache
> +	 * entries to not be invalidated correctly. The problem is that the
> +	 * walk cache index generated for an IOVA is not the same across
> +	 * translation and invalidation requests. This leads to page faults
> +	 * when a PMD entry is released during unmap and populated with a
> +	 * new PTE table during a subsequent map request. Disabling large
> +	 * page mappings avoids the release of the PMD entry and thus avoids
> +	 * translations seeing a stale PMD entry in the walk cache.
> +	 * Fix this by limiting the page mappings to PAGE_SIZE on Tegra194
> +	 * and Tegra234.
> +	 */
> +	if (of_device_is_compatible(np, "nvidia,tegra234-smmu") ||
> +	    of_device_is_compatible(np, "nvidia,tegra194-smmu")) {
> +		smmu->pgsize_bitmap = PAGE_SIZE;
> +		pgtbl_cfg->pgsize_bitmap = smmu->pgsize_bitmap;
> +	}
> +
> +	return 0;
> +}
> +
> +
> static const struct arm_smmu_impl nvidia_smmu_impl = {
> .read_reg = nvidia_smmu_read_reg,
> .write_reg = nvidia_smmu_write_reg,
> @@ -268,10 +296,12 @@ static const struct arm_smmu_impl nvidia_smmu_impl = {
> .global_fault = nvidia_smmu_global_fault,
> .context_fault = nvidia_smmu_context_fault,
> .probe_finalize = nvidia_smmu_probe_finalize,
> + .init_context = nvidia_smmu_init_context,
> };
>
> static const struct arm_smmu_impl nvidia_smmu_single_impl = {
> .probe_finalize = nvidia_smmu_probe_finalize,
> + .init_context = nvidia_smmu_init_context,
> };
>
Reviewed-by: Krishna Reddy <vdumpa@...dia.com>
-KR