Message-ID: <20191125092202.GB20575@krava>
Date: Mon, 25 Nov 2019 10:22:02 +0100
From: Jiri Olsa <jolsa@...hat.com>
To: Song Liu <songliubraving@...com>
Cc: open list <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>,
David Carrillo Cisneros <davidca@...com>,
Peter Zijlstra <peterz@...radead.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Jiri Olsa <jolsa@...nel.org>,
Alexey Budankov <alexey.budankov@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>, Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH v7] perf: Sharing PMU counters across compatible events
On Fri, Nov 22, 2019 at 07:50:06PM +0000, Song Liu wrote:
>
>
> > On Nov 22, 2019, at 11:33 AM, Jiri Olsa <jolsa@...hat.com> wrote:
> >
> > On Fri, Nov 15, 2019 at 03:55:04PM -0800, Song Liu wrote:
> >> This patch enables PMU sharing: when multiple perf_events are
> >> counting the same metric, they can share one hardware PMU counter.
> >> We call such events "compatible events".
> >>
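> >> (A minimal sketch of the compatibility test, with a hypothetical
> >> helper name; the real check may compare more attr fields:)
> >>
> >> static bool perf_event_compatible(struct perf_event *a,
> >>                                   struct perf_event *b)
> >> {
> >>         /* Same PMU type and same config => one counter can serve both. */
> >>         return a->attr.type == b->attr.type &&
> >>                a->attr.config == b->attr.config;
> >> }
> >>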
> >> PMU sharing is limited to events within the same perf_event_context
> >> (ctx). When an event is installed or enabled, the ctx is searched for
> >> compatible events; this is implemented in perf_event_setup_dup(). One
> >> of the compatible events is picked as the master (stored in
> >> event->dup_master). Similarly, when the event is removed or disabled,
> >> perf_event_remove_dup() cleans up the sharing.
> >>
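> >> (Roughly, ignoring locking and master hand-off; perf_event_compatible()
> >> and the events-master-themselves convention are assumptions of this
> >> sketch, not necessarily the patch's exact code:)
> >>
> >> static void perf_event_setup_dup(struct perf_event *event,
> >>                                  struct perf_event_context *ctx)
> >> {
> >>         struct perf_event *tmp;
> >>
> >>         /* Attach to an existing group counting the same metric. */
> >>         list_for_each_entry(tmp, &ctx->event_list, event_entry) {
> >>                 if (tmp != event && perf_event_compatible(event, tmp)) {
> >>                         /* tmp->dup_master is tmp itself if it masters. */
> >>                         event->dup_master = tmp->dup_master ?: tmp;
> >>                         return;
> >>                 }
> >>         }
> >>         /* Nothing compatible in the ctx: the event masters itself. */
> >>         event->dup_master = event;
> >> }
> >>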
> >> A new state, PERF_EVENT_STATE_ENABLED, is introduced for the master
> >> event. It is used when a slave event is ACTIVE but the master event
> >> is not.
> >>
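> >> (Assuming the new state slots in between INACTIVE and ACTIVE, which
> >> would renumber ACTIVE; the exact value here is this sketch's guess:)
> >>
> >> enum perf_event_state {
> >>         PERF_EVENT_STATE_DEAD           = -4,
> >>         PERF_EVENT_STATE_EXIT           = -3,
> >>         PERF_EVENT_STATE_ERROR          = -2,
> >>         PERF_EVENT_STATE_OFF            = -1,
> >>         PERF_EVENT_STATE_INACTIVE       =  0,
> >>         PERF_EVENT_STATE_ENABLED        =  1,   /* hw runs for a slave */
> >>         PERF_EVENT_STATE_ACTIVE         =  2,
> >> };
> >>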
> >> On the critical paths (add, del, read), sharing PMU counters does not
> >> add complexity. Helper functions event_pmu_[add|del|read]() are
> >> introduced to cover these cases; all of them run in O(1) time.
> >>
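> >> (For example, a slave read can stay O(1) by delegating to the master;
> >> 'dup_base' is a made-up field standing in for the patch's per-event
> >> bookkeeping:)
> >>
> >> static void event_pmu_read(struct perf_event *event)
> >> {
> >>         struct perf_event *master = event->dup_master;
> >>         u64 now;
> >>
> >>         if (master == event) {
> >>                 event->pmu->read(event);        /* real hardware read */
> >>                 return;
> >>         }
> >>         /* Read the shared counter, credit this event the delta. */
> >>         master->pmu->read(master);
> >>         now = local64_read(&master->count);
> >>         local64_add(now - event->dup_base, &event->count);
> >>         event->dup_base = now;
> >> }
> >>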
> >> Cc: Peter Zijlstra <peterz@...radead.org>
> >> Cc: Arnaldo Carvalho de Melo <acme@...hat.com>
> >> Cc: Jiri Olsa <jolsa@...nel.org>
> >> Cc: Alexey Budankov <alexey.budankov@...ux.intel.com>
> >> Cc: Namhyung Kim <namhyung@...nel.org>
> >> Cc: Tejun Heo <tj@...nel.org>
> >> Signed-off-by: Song Liu <songliubraving@...com>
> >>
> >> ---
> >> Changes in v7:
> >> Major rewrite to avoid allocating extra master event.
> >
> > hi,
> > what is this based on? I can't apply it on tip/master:
> >
> > Applying: perf: Sharing PMU counters across compatible events
> > error: patch failed: include/linux/perf_event.h:722
> > error: include/linux/perf_event.h: patch does not apply
> > Patch failed at 0001 perf: Sharing PMU counters across compatible events
> > hint: Use 'git am --show-current-patch' to see the failed patch
> > When you have resolved this problem, run "git am --continue".
> > If you prefer to skip this patch, run "git am --skip" instead.
> > To restore the original branch and stop patching, run "git am --abort".
>
hi,
I'm getting the warning below when running 'perf test';
not sure what the reason is yet..
jirka
---
[ 230.228358] WARNING: CPU: 29 PID: 2133 at kernel/events/core.c:3069 __perf_event_enable+0x1d3/0x220
[ 230.237395] Modules linked in: intel_rapl_msr intel_rapl_common skx_edac nfit x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm ipmi_ssif irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel intel_cstate intel_uncore ipmi_si dell_smbios intel_rapl_perf iTCO_wdt wmi_bmof dell_wmi_descriptor ipmi_devintf mei_me iTCO_vendor_support dcdbas mei i2c_i801 lpc_ich wmi ipmi_msghandler acpi_power_meter xfs libcrc32c mgag200 drm_kms_helper i2c_algo_bit drm_vram_helper ttm drm megaraid_sas tg3 crc32c_intel
[ 230.282700] CPU: 29 PID: 2133 Comm: perf Not tainted 5.4.0-rc7+ #37
[ 230.288964] Hardware name: Dell Inc. PowerEdge R440/08CYF7, BIOS 1.7.0 12/14/2018
[ 230.296444] RIP: 0010:__perf_event_enable+0x1d3/0x220
[ 230.301496] Code: 0f b6 d2 83 c2 01 48 83 b8 b0 00 00 00 00 74 15 5b 44 09 f2 5d 4c 89 ef 41 5c 41 5d 41 5e 41 5f e9 72 fd ff ff 83 ca 08 eb e6 <0f> 0b eb ba 48 8b 45 10 4c 8d 78 f0 4c 39 fd 74 83 49 8b 87 98 00
[ 230.320242] RSP: 0018:ffff9b2d87a93bd0 EFLAGS: 00010086
[ 230.325468] RAX: ffff88ad77312010 RBX: ffff88ad77312000 RCX: 0000000000000000
[ 230.332600] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff88ad77312000
[ 230.339731] RBP: ffff88ad77312000 R08: ffff88ad77312000 R09: ffff88ad77312000
[ 230.346863] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88ad84b07100
[ 230.353997] R13: ffff88ad8f9af700 R14: 0000000000000003 R15: ffff88ad77312000
[ 230.361130] FS: 00007fca82c1f100(0000) GS:ffff88ad8f980000(0000) knlGS:0000000000000000
[ 230.369217] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 230.374961] CR2: 00007fca82c1cff0 CR3: 00000017f65a2003 CR4: 00000000007606e0
[ 230.382095] DR0: 0000000000b65268 DR1: 0000000000000000 DR2: 0000000000000000
[ 230.389227] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600
[ 230.396358] PKRU: 55555554
[ 230.399071] Call Trace:
[ 230.401538] event_function+0x85/0xc0
[ 230.405211] ? ctx_resched+0xc0/0xc0
[ 230.408789] remote_function+0x3e/0x50
[ 230.412547] generic_exec_single+0x91/0xd0
[ 230.416652] ? ctx_resched+0xc0/0xc0
[ 230.420230] smp_call_function_single+0xd1/0x110
[ 230.424852] task_function_call+0x45/0x70
[ 230.428872] ? event_sync_dup_count+0x90/0x90
[ 230.433239] event_function_call+0x8f/0x110
[ 230.437425] ? ctx_resched+0xc0/0xc0
[ 230.441003] ? ring_buffer_attach+0x186/0x1b0
[ 230.445362] ? _perf_event_disable+0x50/0x50
[ 230.449636] perf_event_for_each_child+0x34/0x80
[ 230.454256] ? _perf_event_disable+0x50/0x50
[ 230.458530] _perf_ioctl+0x24b/0x700
[ 230.462118] ? mem_cgroup_commit_charge+0xcb/0x1a0
[ 230.466918] ? __handle_mm_fault+0xd49/0x1ac0
[ 230.471279] ? _cond_resched+0x15/0x30
[ 230.475037] perf_ioctl+0x3d/0x60
[ 230.478361] do_vfs_ioctl+0x405/0x660
[ 230.482032] ksys_ioctl+0x5e/0x90
[ 230.485351] ? __x64_sys_fcntl+0x84/0xb0
[ 230.489276] __x64_sys_ioctl+0x16/0x20
[ 230.493033] do_syscall_64+0x5b/0x180
[ 230.496708] entry_SYSCALL_64_after_hwframe+0x44/0xa9