Message-ID: <55F2C726.4080204@huawei.com>
Date: Fri, 11 Sep 2015 20:20:54 +0800
From: "Wangnan (F)" <wangnan0@...wei.com>
To: Arnaldo Carvalho de Melo <acme@...nel.org>,
Kan Liang <kan.liang@...el.com>
CC: Ingo Molnar <mingo@...nel.org>, <linux-kernel@...r.kernel.org>,
"Adrian Hunter" <adrian.hunter@...el.com>,
Borislav Petkov <bp@...e.de>, David Ahern <dsahern@...il.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Jiri Olsa <jolsa@...hat.com>,
Namhyung Kim <namhyung@...nel.org>,
Stephane Eranian <eranian@...gle.com>
Subject: Re: [RFC 00/13] perf_env/CPU socket reorg/fixes
Hi Arnaldo,
I have tested patches 1 to 10. They look good to me except patch 4/13; please
see my email in that thread.
However, during the testing I found a limitation related to CPU
online/offline and 'perf top': if I offline most of the cores before
starting 'perf top', then online them while 'perf top' is running,
'perf top' doesn't report the new CPUs. It still reports that the CPUs
which were online when 'perf top' started consume 100% of the cycles.
So if CPUs are onlined and offlined dynamically and there are many
CPUs, a 'perf top' user may get a confusing result if he or she doesn't
notice that 'perf top' hasn't listed all the cores the machine has.
Here is how I did this:
# for i in `seq 2 7` ; do echo 0 > /sys/devices/system/cpu/cpu$i/online ; done
# perf top -s cpu,socket
The result is something like:
Samples: 28K of event 'cycles', Event count (approx.): 23640606383
Overhead Socket CPU
67.14% 000 000
32.86% 000 001
Then online them:
# for i in `seq 2 7` ; do echo 1 > /sys/devices/system/cpu/cpu$i/online ; done
After a while, 'perf top' still reports only two CPUs:
Samples: 400K of event 'cycles', Event count (approx.): 38728257939
Overhead Socket CPU
51.02% 000 001
48.98% 000 000
And a freshly started 'perf top' reports the correct result:
Samples: 28K of event 'cycles', Event count (approx.): 24741565854
Overhead Socket CPU
27.26% 000 005
21.07% 000 002
13.07% 000 001
12.69% 000 000
8.07% 000 006
6.75% 000 007
5.64% 000 004
5.45% 000 003
However, it is a relatively rare case. I don't think we have to fix it
in this patchset.
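If someone does want to fix it later, a rough sketch of the idea (just an
illustration in shell, not perf code) is to re-read the online mask and
compare it with a snapshot taken at startup, instead of trusting a cached
copy of the CPU map:

```shell
#!/bin/sh
# Illustration only: detect a CPU hotplug event by comparing the current
# online mask against an earlier snapshot, instead of caching the set once.
snapshot=$(cat /sys/devices/system/cpu/online)
# ... CPUs may be onlined/offlined in between ...
current=$(cat /sys/devices/system/cpu/online)
if [ "$current" != "$snapshot" ]; then
    echo "online CPU set changed: $snapshot -> $current"
else
    echo "online CPU set unchanged: $current"
fi
```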
Thank you.
On 2015/9/10 3:50, Arnaldo Carvalho de Melo wrote:
> Hi,
>
> Please take a look at these changes to fix the problems reported by
> Wang Nan wrt accesses to the cpu_topology_map information.
>
> The fixes are present on these following two csets:
>
> perf event: Use machine->env to find the cpu -> socket mapping
> perf report: Do not blindly use env->cpu[al.cpu].socket_id
>
> The rest are fixes made while working on this, infrastructure to enable
> the fixes, reverts for things that ended up not being necessary and some
> cleanups.
>
> It is available at:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git perf/env
>
> Please let me know if I can have your Acked-by, Tested-by or
> Reviewed-by.
>
> - Arnaldo
>
> Arnaldo Carvalho de Melo (13):
> perf env: Move perf_env out of header.h and session.c into separate object
> perf env: Rename some leftovers from rename to perf_env
> perf env: Adopt perf_header__set_cmdline
> perf env: Introduce read_cpu_topology_map() method
> perf sort: Set flag stating if the "socket" key is being used
> perf top: Cache the cpu topology info when "-s socket" is used
> perf hists browser: Fixup the "cpu" column width calculation
> perf machine: Add pointer to sample's environment
> perf event: Use machine->env to find the cpu -> socket mapping
> perf report: Do not blindly use env->cpu[al.cpu].socket_id
> Revert "perf evsel: Add a backpointer to the evlist a evsel is in"
> perf evsel: Remove forward declaration of 'struct perf_evlist'
> Revert "perf evlist: Add backpointer for perf_env to evlist"
--