Message-Id: <6229ca46-bae6-2dfe-184c-534e30d303ab@linux.vnet.ibm.com>
Date: Wed, 19 Dec 2018 11:36:36 +0530
From: Madhavan Srinivasan <maddy@...ux.vnet.ibm.com>
To: Anju T Sudhakar <anju@...ux.vnet.ibm.com>, mpe@...erman.id.au,
linux-kernel@...r.kernel.org
Cc: linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH v2 2/5] powerpc/perf: Rearrange setting of ldbar for
thread-imc
On 14/12/18 2:41 PM, Anju T Sudhakar wrote:
> LDBAR holds the memory address allocated for each cpu. For thread-imc
> the mode bit (i.e. bit 1) of LDBAR is set to accumulation.
> Currently, ldbar is loaded with per cpu memory address and mode set to
> accumulation at boot time.
>
> To enable trace-imc, the mode bit of ldbar should be set to 'trace'. So, to
> accommodate the trace mode of IMC, reposition the setting of ldbar for
> thread-imc to thread_imc_event_add(), and reset ldbar in thread_imc_event_del().
Changes look fine to me.
Reviewed-by: Madhavan Srinivasan <maddy@...ux.vnet.ibm.com>
> Signed-off-by: Anju T Sudhakar <anju@...ux.vnet.ibm.com>
> ---
> arch/powerpc/perf/imc-pmu.c | 28 +++++++++++++++++-----------
> 1 file changed, 17 insertions(+), 11 deletions(-)
>
> diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
> index f292a3f284f1..3bef46f8417d 100644
> --- a/arch/powerpc/perf/imc-pmu.c
> +++ b/arch/powerpc/perf/imc-pmu.c
> @@ -806,8 +806,11 @@ static int core_imc_event_init(struct perf_event *event)
> }
>
> /*
> - * Allocates a page of memory for each of the online cpus, and write the
> - * physical base address of that page to the LDBAR for that cpu.
> + * Allocates a page of memory for each of the online cpus, and load
> + * LDBAR with 0.
> + * The physical base address of the page allocated for a cpu will be
> + * written to the LDBAR for that cpu, when the thread-imc event
> + * is added.
> *
> * LDBAR Register Layout:
> *
> @@ -825,7 +828,7 @@ static int core_imc_event_init(struct perf_event *event)
> */
> static int thread_imc_mem_alloc(int cpu_id, int size)
> {
> - u64 ldbar_value, *local_mem = per_cpu(thread_imc_mem, cpu_id);
> + u64 *local_mem = per_cpu(thread_imc_mem, cpu_id);
> int nid = cpu_to_node(cpu_id);
>
> if (!local_mem) {
> @@ -842,9 +845,7 @@ static int thread_imc_mem_alloc(int cpu_id, int size)
> per_cpu(thread_imc_mem, cpu_id) = local_mem;
> }
>
> - ldbar_value = ((u64)local_mem & THREAD_IMC_LDBAR_MASK) | THREAD_IMC_ENABLE;
> -
> - mtspr(SPRN_LDBAR, ldbar_value);
> + mtspr(SPRN_LDBAR, 0);
> return 0;
> }
>
> @@ -995,6 +996,7 @@ static int thread_imc_event_add(struct perf_event *event, int flags)
> {
> int core_id;
> struct imc_pmu_ref *ref;
> + u64 ldbar_value, *local_mem = per_cpu(thread_imc_mem, smp_processor_id());
>
> if (flags & PERF_EF_START)
> imc_event_start(event, flags);
> @@ -1003,6 +1005,9 @@ static int thread_imc_event_add(struct perf_event *event, int flags)
> return -EINVAL;
>
> core_id = smp_processor_id() / threads_per_core;
> + ldbar_value = ((u64)local_mem & THREAD_IMC_LDBAR_MASK) | THREAD_IMC_ENABLE;
> + mtspr(SPRN_LDBAR, ldbar_value);
> +
> /*
> * imc pmus are enabled only when it is used.
> * See if this is triggered for the first time.
> @@ -1034,11 +1039,7 @@ static void thread_imc_event_del(struct perf_event *event, int flags)
> int core_id;
> struct imc_pmu_ref *ref;
>
> - /*
> - * Take a snapshot and calculate the delta and update
> - * the event counter values.
> - */
> - imc_event_update(event);
> + mtspr(SPRN_LDBAR, 0);
>
> core_id = smp_processor_id() / threads_per_core;
> ref = &core_imc_refc[core_id];
> @@ -1057,6 +1058,11 @@ static void thread_imc_event_del(struct perf_event *event, int flags)
> ref->refc = 0;
> }
> mutex_unlock(&ref->lock);
> + /*
> + * Take a snapshot and calculate the delta and update
> + * the event counter values.
> + */
> + imc_event_update(event);
> }
>
> /* update_pmu_ops : Populate the appropriate operations for "pmu" */