Message-ID: <71cdc0ec-1b58-69c1-eaca-631800774c13@linux.ibm.com>
Date: Thu, 26 Aug 2021 13:26:08 +0530
From: kajoljain <kjain@...ux.ibm.com>
To: Song Liu <songliubraving@...com>,
Peter Zijlstra <peterz@...radead.org>
Cc: "open list:BPF (Safe dynamic programs and tools)"
<bpf@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Kernel Team <Kernel-team@...com>
Subject: Re: [PATCH bpf-next 1/3] perf: enable branch record for software
events
On 8/25/21 8:52 PM, Song Liu wrote:
>
>
>> On Aug 25, 2021, at 5:09 AM, Peter Zijlstra <peterz@...radead.org> wrote:
>>
>> On Mon, Aug 23, 2021 at 11:01:55PM -0700, Song Liu wrote:
>>
>>> arch/x86/events/intel/core.c |  5 ++++-
>>> arch/x86/events/intel/lbr.c  | 12 ++++++++++++
>>> arch/x86/events/perf_event.h |  2 ++
>>> include/linux/perf_event.h   | 33 +++++++++++++++++++++++++++++++++
>>> kernel/events/core.c         | 28 ++++++++++++++++++++++++++++
>>> 5 files changed, 79 insertions(+), 1 deletion(-)
>>
>> No PowerPC support :/
>
> I don't have a PowerPC system for testing at the moment. I guess we can
> settle the overall framework now and ask the PowerPC folks for help with
> the PowerPC support later?
Hi Song,
I will look at the powerpc side to enable this.
Thanks,
Kajol Jain
>
>>
>>> +void intel_pmu_snapshot_branch_stack(void)
>>> +{
>>> +        struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>>> +
>>> +        intel_pmu_lbr_disable_all();
>>> +        intel_pmu_lbr_read();
>>> +        memcpy(this_cpu_ptr(&perf_branch_snapshot_entries), cpuc->lbr_entries,
>>> +               sizeof(struct perf_branch_entry) * x86_pmu.lbr_nr);
>>> +        *this_cpu_ptr(&perf_branch_snapshot_size) = x86_pmu.lbr_nr;
>>> +        intel_pmu_lbr_enable_all(false);
>>> +}
>>
>> Still has the layering violation and issues vs PMI.
>
> Yes, this was the biggest challenge after I tested this more. I tried
> perf_[disable|enable]_pmu() and a function pointer in "struct pmu". However,
> all of that logic consumes LBR entries. In one of the versions, 22 out of the
> 32 LBR entries were branches after the fexit event. Most of them came from
> perf_disable_pmu(), and each function pointer consumes 1 or 2 entries.
> This would be worse on systems with fewer LBR entries.
>
> On the other hand, I think the current version is not too bad. It may corrupt
> some samples when there is a collision between this and a PMI, but it should
> not cause serious issues. Did I miss anything more serious?
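
To make the trade-off above concrete, here is the quoted snapshot path again
with annotations. This is only a sketch of the intent, not a change to the
patch; note that the x86 PMI is delivered as an NMI, so it cannot simply be
masked around this window:

        void intel_pmu_snapshot_branch_stack(void)
        {
                struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

                /* Freeze the LBR as early as possible: every instruction
                 * executed before this point consumes LBR entries. */
                intel_pmu_lbr_disable_all();
                /* Decode the LBR MSRs into cpuc->lbr_entries. */
                intel_pmu_lbr_read();
                /* A PMI (an NMI on x86) landing in this window may rewrite
                 * cpuc->lbr_entries and corrupt this one snapshot; that is
                 * the collision discussed above. */
                memcpy(this_cpu_ptr(&perf_branch_snapshot_entries),
                       cpuc->lbr_entries,
                       sizeof(struct perf_branch_entry) * x86_pmu.lbr_nr);
                *this_cpu_ptr(&perf_branch_snapshot_size) = x86_pmu.lbr_nr;
                /* Re-enable; 'false' means not called from PMI context. */
                intel_pmu_lbr_enable_all(false);
        }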
>
>>
>>> +#ifdef CONFIG_HAVE_STATIC_CALL
>>> +DECLARE_STATIC_CALL(perf_snapshot_branch_stack,
>>> +                    perf_default_snapshot_branch_stack);
>>> +#else
>>> +extern void (*perf_snapshot_branch_stack)(void);
>>> +#endif
>>
>> That's weird, static call should work unconditionally, and fall back to
>> a regular function pointer exactly like you do here. Search for:
>> "Generic Implementation" in include/linux/static_call.h
>
> Thanks for the pointer. Let me look into it.
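
For reference, the "Generic Implementation" Peter points at reduces a static
call to a plain function pointer, so it is available even without
CONFIG_HAVE_STATIC_CALL. Condensed from include/linux/static_call.h (see the
header for the real definitions):

        struct static_call_key {
                void *func;
        };

        #define DEFINE_STATIC_CALL(name, _func)                         \
                DECLARE_STATIC_CALL(name, _func);                       \
                struct static_call_key STATIC_CALL_KEY(name) = {        \
                        .func = _func,                                  \
                }

        #define static_call(name)                                       \
                ((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))

So the #ifdef CONFIG_HAVE_STATIC_CALL / #else pairs in the patch can collapse
into a single DEFINE_STATIC_CALL().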
>>
>>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>>> index 011cc5069b7ba..b42cc20451709 100644
>>> --- a/kernel/events/core.c
>>> +++ b/kernel/events/core.c
>>
>>> +#ifdef CONFIG_HAVE_STATIC_CALL
>>> +DEFINE_STATIC_CALL(perf_snapshot_branch_stack,
>>> +                   perf_default_snapshot_branch_stack);
>>> +#else
>>> +void (*perf_snapshot_branch_stack)(void) = perf_default_snapshot_branch_stack;
>>> +#endif
>>
>> Idem.
>>
>> Something like:
>>
>> DEFINE_STATIC_CALL_NULL(perf_snapshot_branch_stack, void (*)(void));
>>
>> with usage like: static_call_cond(perf_snapshot_branch_stack)();
>>
>> Should unconditionally work.
>>
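
Spelling that suggestion out as a sketch (the init-time registration site is
an assumption; only the names from Peter's snippet and the quoted patch are
used):

        /* kernel/events/core.c: starts out NULL, so call sites are a NOP
         * until an implementation is registered. */
        DEFINE_STATIC_CALL_NULL(perf_snapshot_branch_stack, void (*)(void));

        /* Call site: static_call_cond() quietly does nothing while the
         * call is still NULL. */
        static_call_cond(perf_snapshot_branch_stack)();

        /* arch/x86/events/intel/core.c, at PMU init time (assumed):
         * register the LBR-based implementation only when LBRs exist. */
        if (x86_pmu.lbr_nr)
                static_call_update(perf_snapshot_branch_stack,
                                   intel_pmu_snapshot_branch_stack);

static_call_update() works with all static_call implementations, including
the generic one, so no #ifdef is needed at the registration site either.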
>>> +int perf_read_branch_snapshot(void *buf, size_t len)
>>> +{
>>> +        int cnt;
>>> +
>>> +        memcpy(buf, *this_cpu_ptr(&perf_branch_snapshot_entries),
>>> +               min_t(u32, (u32)len,
>>> +                     sizeof(struct perf_branch_entry) * MAX_BRANCH_SNAPSHOT));
>>> +        cnt = *this_cpu_ptr(&perf_branch_snapshot_size);
>>> +
>>> +        return (cnt > 0) ? cnt : -EOPNOTSUPP;
>>> +}
>>
>> Doesn't seem used at all..
>
> At the moment, we only use this from the BPF side (see 2/3). We can certainly
> use it from the perf side, but that would require a discussion of the user
> interface. How about we have that discussion later?
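
For readers without 2/3 at hand, the BPF side can wrap it in a helper shaped
roughly like this (a sketch; the helper name and flags handling are
illustrative, not necessarily the series' final form):

        BPF_CALL_3(bpf_get_branch_snapshot, void *, buf, u32, size, u64, flags)
        {
                if (unlikely(flags))    /* no flags defined yet */
                        return -EINVAL;

                /* perf_read_branch_snapshot() copies up to
                 * MAX_BRANCH_SNAPSHOT entries into buf and returns the
                 * number of captured entries, or -EOPNOTSUPP. */
                return perf_read_branch_snapshot(buf, size);
        }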
>
> Thanks,
> Song
>