Message-ID: <20210126134128.1368-2-thunder.leizhen@huawei.com>
Date: Tue, 26 Jan 2021 21:41:26 +0800
From: Zhen Lei <thunder.leizhen@...wei.com>
To: Will Deacon <will@...nel.org>, Robin Murphy <robin.murphy@....com>,
"Mark Rutland" <mark.rutland@....com>,
Joerg Roedel <joro@...tes.org>,
linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
iommu <iommu@...ts.linux-foundation.org>,
linux-kernel <linux-kernel@...r.kernel.org>
CC: Zhen Lei <thunder.leizhen@...wei.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>
Subject: [PATCH v2 1/3] perf/smmuv3: Don't reserve the PMCG register spaces

According to the SMMUv3 specification:

Each PMCG counter group is represented by one 4KB page (Page 0) with one
optional additional 4KB page (Page 1), both of which are at IMPLEMENTATION
DEFINED base addresses.

This means that the PMCG register spaces may be within the 64KB pages of
the SMMUv3 register space. When both the SMMU and PMCG drivers reserve
their own resources, a resource conflict occurs.

To avoid this conflict, don't reserve the PMCG regions.

Suggested-by: Robin Murphy <robin.murphy@....com>
Signed-off-by: Zhen Lei <thunder.leizhen@...wei.com>
---
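As a reviewer aid, here is a minimal sketch (not part of the diff below)
contrasting the two mapping paths. The pmcg_map_*() wrappers are
hypothetical and exist only to show the underlying kernel helpers; the
resource index 0 is assumed:

#include <linux/err.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

/*
 * Old path: devm_platform_get_and_ioremap_resource() ends up in
 * devm_ioremap_resource(), which calls devm_request_mem_region() before
 * mapping. That reservation fails with -EBUSY when the SMMUv3 driver has
 * already claimed the 64KB page that contains the PMCG page.
 */
static void __iomem *pmcg_map_reserving(struct platform_device *pdev)
{
	struct resource *res;

	return devm_platform_get_and_ioremap_resource(pdev, 0, &res);
}

/*
 * New path (what this patch switches to): look up the resource and map it
 * with devm_ioremap() only, skipping devm_request_mem_region(), so there is
 * no conflict with the SMMUv3 driver's reservation.
 */
static void __iomem *pmcg_map_non_reserving(struct platform_device *pdev)
{
	struct resource *res;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res)
		return ERR_PTR(-EINVAL);

	return devm_ioremap(&pdev->dev, res->start, resource_size(res));
}
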
drivers/perf/arm_smmuv3_pmu.c | 27 +++++++++++++++++++++++++--
1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
index 74474bb322c3f26..e5e505a0804fe53 100644
--- a/drivers/perf/arm_smmuv3_pmu.c
+++ b/drivers/perf/arm_smmuv3_pmu.c
@@ -761,6 +761,29 @@ static void smmu_pmu_get_acpi_options(struct smmu_pmu *smmu_pmu)
dev_notice(smmu_pmu->dev, "option mask 0x%x\n", smmu_pmu->options);
}

+static void __iomem *
+smmu_pmu_get_and_ioremap_resource(struct platform_device *pdev,
+ unsigned int index,
+ struct resource **res)
+{
+ void __iomem *base;
+ struct resource *r;
+
+ r = platform_get_resource(pdev, IORESOURCE_MEM, index);
+ if (!r) {
+ dev_err(&pdev->dev, "invalid resource\n");
+ return ERR_PTR(-EINVAL);
+ }
+ if (res)
+ *res = r;
+
+ base = devm_ioremap(&pdev->dev, r->start, resource_size(r));
+ if (!base)
+ return ERR_PTR(-ENOMEM);
+
+ return base;
+}
+
static int smmu_pmu_probe(struct platform_device *pdev)
{
struct smmu_pmu *smmu_pmu;
@@ -793,7 +816,7 @@ static int smmu_pmu_probe(struct platform_device *pdev)
.capabilities = PERF_PMU_CAP_NO_EXCLUDE,
};

- smmu_pmu->reg_base = devm_platform_get_and_ioremap_resource(pdev, 0, &res_0);
+ smmu_pmu->reg_base = smmu_pmu_get_and_ioremap_resource(pdev, 0, &res_0);
if (IS_ERR(smmu_pmu->reg_base))
return PTR_ERR(smmu_pmu->reg_base);

@@ -801,7 +824,7 @@ static int smmu_pmu_probe(struct platform_device *pdev)

/* Determine if page 1 is present */
if (cfgr & SMMU_PMCG_CFGR_RELOC_CTRS) {
- smmu_pmu->reloc_base = devm_platform_ioremap_resource(pdev, 1);
+ smmu_pmu->reloc_base = smmu_pmu_get_and_ioremap_resource(pdev, 1, NULL);
if (IS_ERR(smmu_pmu->reloc_base))
return PTR_ERR(smmu_pmu->reloc_base);
} else {
--
1.8.3