Message-ID: <CAPqJEFpSkN9fJgNut6bdZUzpTvNp_mikWdCSrE=TNnajf5BRRw@mail.gmail.com>
Date: Tue, 11 Jul 2023 23:08:44 +0800
From: Eric Lin <eric.lin@...ive.com>
To: Ben Dooks <ben.dooks@...ethink.co.uk>
Cc: will@...nel.org, linux-riscv@...ts.infradead.org,
linux-kernel@...r.kernel.org, Conor Dooley <conor@...nel.org>
Subject: Re: [PATCH 2/3] soc: sifive: Add SiFive private L2 cache PMU driver
Hi Ben,
On Tue, Jul 11, 2023 at 4:41 PM Ben Dooks <ben.dooks@...ethink.co.uk> wrote:
>
> On 20/06/2023 04:14, Eric Lin wrote:
> > On Fri, Jun 16, 2023 at 6:13 PM Conor Dooley <conor.dooley@...rochip.com> wrote:
> >>
> >> On Fri, Jun 16, 2023 at 02:32:09PM +0800, Eric Lin wrote:
> >>> From: Greentime Hu <greentime.hu@...ive.com>
> >>>
> >>> This adds the SiFive private L2 cache PMU driver. Users
> >>> can use the perf tool to profile by event name or event ID.
> >>>
> >>> Example:
> >>> $ perf stat -C 0 -e /sifive_pl2_pmu/inner_acquire_block_btot/
> >>> -e /sifive_pl2_pmu/inner_acquire_block_ntob/
> >>> -e /sifive_pl2_pmu/inner_acquire_block_ntot/ ls
> >>>
> >>> Performance counter stats for 'CPU(s) 0':
> >>>
> >>> 300 sifive_pl2_pmu/inner_acquire_block_btot/
> >>> 17801 sifive_pl2_pmu/inner_acquire_block_ntob/
> >>> 5253 sifive_pl2_pmu/inner_acquire_block_ntot/
> >>>
> >>> 0.088917326 seconds time elapsed
> >>>
> >>> $ perf stat -C 0 -e /sifive_pl2_pmu/event=0x10001/
> >>> -e /sifive_pl2_pmu/event=0x4001/
> >>> -e /sifive_pl2_pmu/event=0x8001/ ls
> >>>
> >>> Performance counter stats for 'CPU(s) 0':
> >>>
> >>> 251 sifive_pl2_pmu/event=0x10001/
> >>> 2620 sifive_pl2_pmu/event=0x4001/
> >>> 644 sifive_pl2_pmu/event=0x8001/
> >>>
> >>> 0.092827110 seconds time elapsed
> >>>
> >>> Signed-off-by: Greentime Hu <greentime.hu@...ive.com>
> >>> Signed-off-by: Eric Lin <eric.lin@...ive.com>
> >>> Reviewed-by: Zong Li <zong.li@...ive.com>
> >>> Reviewed-by: Nick Hu <nick.hu@...ive.com>
> >>> ---
> >>> drivers/soc/sifive/Kconfig | 9 +
> >>> drivers/soc/sifive/Makefile | 1 +
> >>> drivers/soc/sifive/sifive_pl2.h | 20 +
> >>> drivers/soc/sifive/sifive_pl2_cache.c | 16 +
> >>> drivers/soc/sifive/sifive_pl2_pmu.c | 669 ++++++++++++++++++++++++++
> >>
> >> Perf drivers should be in drivers/perf, no?
> >>
> >
> > Hi Conor,
> >
> > Yes, I see most of the drivers are in the drivers/perf.
> >
> > But when I grep for perf_pmu_register(), it seems not all the PMU
> > drivers are in drivers/perf, as below:
> >
> > arch/arm/mach-imx/mmdc.c:517: ret = perf_pmu_register(&(pmu_mmdc->pmu), name, -1);
> > arch/arm/mm/cache-l2x0-pmu.c:552: ret = perf_pmu_register(l2x0_pmu, l2x0_name, -1);
> > ...
> > drivers/dma/idxd/perfmon.c:627: rc = perf_pmu_register(&idxd_pmu->pmu, idxd_pmu->name, -1);
> > drivers/fpga/dfl-fme-perf.c:904:static int fme_perf_pmu_register(struct platform_device *pdev,
> > drivers/fpga/dfl-fme-perf.c:929: ret = perf_pmu_register(pmu, name, -1);
> > ...
> > drivers/gpu/drm/amd/amdgpu/amdgpu_pmu.c:549: ret = perf_pmu_register(&pmu_entry->pmu, pmu_name, -1);
> > drivers/gpu/drm/i915/i915_pmu.c:1190: ret = perf_pmu_register(&pmu->base, pmu->name, -1);
> > drivers/hwtracing/coresight/coresight-etm-perf.c:907: ret = perf_pmu_register(&etm_pmu, CORESIGHT_ETM_PMU_NAME, -1);
> > drivers/hwtracing/ptt/hisi_ptt.c:895: ret = perf_pmu_register(&hisi_ptt->hisi_ptt_pmu, pmu_name, -1);
> > drivers/iommu/intel/perfmon.c:570: return perf_pmu_register(&iommu_pmu->pmu, iommu_pmu->pmu.name, -1);
> > drivers/nvdimm/nd_perf.c:309: rc = perf_pmu_register(&nd_pmu->pmu, nd_pmu->pmu.name, -1);
> > ...
> >
> > I'm just wondering which kinds of PMU drivers should be in
> > drivers/perf and which should not.
> > Thanks.
> >
>
> Given that the registers for the l2 cache controls and l2 pmu don't
> overlap, do we need the pmu and general cache drivers together?
>
Following Will's suggestion, I'll move the pl2 pmu driver to
drivers/perf in v2. Thanks.
Best Regards,
Eric Lin.
> --
> Ben Dooks http://www.codethink.co.uk/
> Senior Engineer Codethink - Providing Genius
>
> https://www.codethink.co.uk/privacy.html
>