Message-ID: <ZS7TAr1bpOfkeNuv@kernel.org>
Date: Tue, 17 Oct 2023 15:31:30 -0300
From: Arnaldo Carvalho de Melo <acme@...nel.org>
To: Ingo Molnar <mingo@...nel.org>
Cc: Namhyung Kim <namhyung@...nel.org>, Jiri Olsa <jolsa@...nel.org>,
Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-perf-users@...r.kernel.org
Subject: Re: [perf stat] Extend --cpu to non-system-wide runs too? was Re:
[PATCH v3] perf bench sched pipe: Add -G/--cgroups option

On Tue, Oct 17, 2023 at 02:43:45PM +0200, Ingo Molnar wrote:
> * Arnaldo Carvalho de Melo <acme@...nel.org> wrote:
> > On Tue, Oct 17, 2023 at 01:40:07PM +0200, Ingo Molnar wrote:
> > > Side note: it might make sense to add a sane cpumask/affinity setting
> > > option to perf stat itself:
> > >
> > >   perf stat --cpumask
> > >
> > > ... or so?
> > >
> > > We do have -C:
> > >
> > >  -C, --cpu <cpu>       list of cpus to monitor in system-wide
> > >
> > > ... but that's limited to --all-cpus, right?
> > >
> > > Perhaps we could extend --cpu to non-system-wide runs too?
> > Maybe I misunderstood your question, but it's a list of cpus to limit the
> > counting:
> Ok.
> So I thought that "--cpumask mask/list/etc" should simply do what 'taskset'
> is doing: using the sched_setaffinity() syscall to restrict the current
> workload and all its children to the given CPUs.
>
> As for the impact on perf stat itself: it could just call sched_setaffinity()
> early on, and not bother about it?
>
> Having it built into perf would simply make it easier to not forget
> running 'taskset'. :-)

Would that be the only advantage?
I think using taskset isn't that much of a burden and keeps with the
Unix tradition, no? :-\
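
For the record, everything such a built-in (or taskset itself) needs to do is
set the mask before exec'ing the workload, since the mask is inherited by the
workload and all of its children. A minimal, hypothetical sketch (not perf
code; CPUs 1-2 are hardcoded where the "-C"-style parsing would go):

/* cpumask-run: pin ourselves, then exec the workload (sketch, not perf code) */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	cpu_set_t mask;
	int cpu;

	if (argc < 2) {
		fprintf(stderr, "usage: %s command [args]\n", argv[0]);
		return EXIT_FAILURE;
	}

	CPU_ZERO(&mask);
	for (cpu = 1; cpu <= 2; cpu++)	/* e.g. what "-C 1,2" would parse into */
		CPU_SET(cpu, &mask);

	/* applies to this process and is inherited across fork()/exec() */
	if (sched_setaffinity(0, sizeof(mask), &mask)) {
		perror("sched_setaffinity");
		return EXIT_FAILURE;
	}

	execvp(argv[1], &argv[1]);	/* run the workload, e.g. "perf stat -e cycles sleep 1" */
	perror("execvp");
	return EXIT_FAILURE;
}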
See, using 'perf record -C', i.e. sampling, will use sched_setaffinity(),
and in that case there is a clear advantage... wait, this train of
thought made me remember something, but it's just about counter setup,
not about the workload:
[acme@...e perf-tools-next]$ grep affinity__set tools/perf/*.c
tools/perf/builtin-stat.c: else if (affinity__setup(&saved_affinity) < 0)
tools/perf/builtin-stat.c: if (affinity__setup(&saved_affinity) < 0)
[acme@...e perf-tools-next]$
/*
* perf_event_open does an IPI internally to the target CPU.
* It is more efficient to change perf's affinity to the target
* CPU and then set up all events on that CPU, so we amortize
* CPU communication.
*/
void affinity__set(struct affinity *a, int cpu)
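
I.e. the event-setup path temporarily migrates perf itself next to the CPU
whose counter it is about to open, so the IPI inside perf_event_open() stays
(mostly) local. A rough, self-contained illustration of that pattern with
plain syscalls (not the actual affinity__set()/affinity__cleanup() code;
open_cycles_on() is just a made-up name here):

/* Sketch of the "pin, then open the counter" pattern, not perf's implementation */
#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sched.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int open_cycles_on(int cpu)
{
	struct perf_event_attr attr;
	cpu_set_t saved, target;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size   = sizeof(attr);
	attr.type   = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;

	sched_getaffinity(0, sizeof(saved), &saved);	/* remember where we may run */

	CPU_ZERO(&target);
	CPU_SET(cpu, &target);
	sched_setaffinity(0, sizeof(target), &target);	/* move next to the target CPU */

	/* pid == -1, cpu >= 0: count everything that runs on that CPU */
	fd = syscall(SYS_perf_event_open, &attr, -1, cpu, -1, 0);

	sched_setaffinity(0, sizeof(saved), &saved);	/* and move back */
	return fd;
}

Each such open costs a couple of extra sched_setaffinity() calls, which is
what the traces below are counting: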
[root@...e ~]# perf trace --summary -e sched_setaffinity perf stat -e cycles -a sleep 1

 Performance counter stats for 'system wide':

     6,319,186,681      cycles

       1.002665795 seconds time elapsed

 Summary of events:

 perf (24307), 396 events, 87.4%

   syscall            calls  errors    total       min       avg       max    stddev
                                      (msec)    (msec)    (msec)    (msec)       (%)
   ----------------- ------- ------ -------- --------- --------- --------- ---------
   sched_setaffinity     198      0    4.544     0.006     0.023     0.042     2.30%

[root@...e ~]#
[root@...e ~]# perf trace --summary -e sched_setaffinity perf stat -C 1 -e cycles -a sleep 1

 Performance counter stats for 'system wide':

       105,311,506      cycles

       1.001203282 seconds time elapsed

 Summary of events:

 perf (24633), 24 events, 29.6%

   syscall            calls  errors    total       min       avg       max    stddev
                                      (msec)    (msec)    (msec)    (msec)       (%)
   ----------------- ------- ------ -------- --------- --------- --------- ---------
   sched_setaffinity      12      0    0.105     0.005     0.009     0.039    32.07%
[root@...e ~]# perf trace --summary -e sched_setaffinity perf stat -C 1,2 -e cycles -a sleep 1

 Performance counter stats for 'system wide':

       131,474,375      cycles

       1.001324346 seconds time elapsed

 Summary of events:

 perf (24636), 36 events, 38.7%

   syscall            calls  errors    total       min       avg       max    stddev
                                      (msec)    (msec)    (msec)    (msec)       (%)
   ----------------- ------- ------ -------- --------- --------- --------- ---------
   sched_setaffinity      18      0    0.442     0.000     0.025     0.093    24.75%
[root@...e ~]# perf trace --summary -e sched_setaffinity perf stat -C 1,2,30 -e cycles -a sleep 1

 Performance counter stats for 'system wide':

       191,674,889      cycles

       1.001280015 seconds time elapsed

 Summary of events:

 perf (24639), 48 events, 45.7%

   syscall            calls  errors    total       min       avg       max    stddev
                                      (msec)    (msec)    (msec)    (msec)       (%)
   ----------------- ------- ------ -------- --------- --------- --------- ---------
   sched_setaffinity      24      0    0.835     0.000     0.035     0.144    24.40%

[root@...e ~]#
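
The counts also scale the way you would expect: if this box has 32 online
CPUs, the numbers above fit a "6 + 6 per monitored CPU" pattern (just my
reading of the output, I didn't chase the exact per-CPU constant in the code):

  -C 1       ->  12 = 6 + 6 * 1
  -C 1,2     ->  18 = 6 + 6 * 2
  -C 1,2,30  ->  24 = 6 + 6 * 3
  -a         -> 198 = 6 + 6 * 32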
Too much affinity setting :-)
- Arnaldo