Message-ID: <555BB8B6.6070607@gmail.com>
Date: Tue, 19 May 2015 16:27:02 -0600
From: David Ahern <dsahern@...il.com>
To: Namhyung Kim <namhyung@...nel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>
CC: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Jiri Olsa <jolsa@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Andi Kleen <andi@...stfloor.org>,
Frederic Weisbecker <fweisbec@...il.com>,
Stephane Eranian <eranian@...gle.com>
Subject: Re: [PATCH 38/40] perf session: Handle index files generally
On 5/17/15 6:30 PM, Namhyung Kim wrote:
> The current code assumes that the number of index items matches the
> number of CPUs, so it creates that many threads. But that doesn't hold
> for a non-system-wide session or for data that came from a different
> machine.
>
> Instead, create at most as many threads as there are online CPUs and
> process the data with those.
-----8<-----
> @@ -1717,6 +1742,7 @@ int perf_session__process_events_mt(struct perf_session *session, void *arg)
> int err, i, k;
> int nr_index = session->header.nr_index;
> u64 size = perf_data_file__size(file);
> + int nr_thread = sysconf(_SC_NPROCESSORS_ONLN);
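
(A minimal sketch of what the hunk above amounts to, assuming pthread
workers and a hypothetical process_index() callback; not the actual
perf code:)

  #include <pthread.h>
  #include <unistd.h>

  /* Hypothetical worker entry point standing in for the per-index
   * processing done in perf_session__process_events_mt(). */
  static void *process_index(void *arg)
  {
          return NULL;
  }

  static int start_workers(pthread_t *threads, int max_threads)
  {
          /* Cap the worker count at the number of online CPUs
           * rather than at session->header.nr_index. */
          int nr_thread = sysconf(_SC_NPROCESSORS_ONLN);
          int i;

          if (nr_thread > max_threads)
                  nr_thread = max_threads;

          for (i = 0; i < nr_thread; i++) {
                  if (pthread_create(&threads[i], NULL,
                                     process_index, NULL))
                          return -1;
          }
          return nr_thread;
  }
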
It's not clear to me how this multi-threaded perf is going to work on
large systems, especially with this patch when a system has holes in
its set of active CPUs, e.g.:
# lscpu
Architecture:          sparc64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Big Endian
CPU(s):                704
On-line CPU(s) list:   32-63,128-223,256-351,384-479,576-831,864-927,960-1023
Thread(s) per core:    12
Core(s) per socket:    18
Socket(s):             3
NUMA node(s):          4
NUMA node0 CPU(s):     32-63,128-223
NUMA node1 CPU(s):     256-351,384-479
NUMA node2 CPU(s):     576-767
NUMA node3 CPU(s):     768-831,864-927,960-1023
So you are going to spawn 704 threads? Each thread handles a per-cpu buffer?
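
(To make the concern concrete: on a box like this,
sysconf(_SC_NPROCESSORS_ONLN) reports 704 online CPUs, but CPU ids run
as high as 1023, so code that treats the online count as the highest
CPU id will skip buffers. A standalone sketch that shows the mismatch
by parsing /sys/devices/system/cpu/online:)

  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          long online = sysconf(_SC_NPROCESSORS_ONLN);
          int max_cpu = -1, cpu;
          FILE *f = fopen("/sys/devices/system/cpu/online", "r");

          /* The file holds ranges like "32-63,128-223,...";
           * the last number is the highest online CPU id. */
          if (f) {
                  while (fscanf(f, "%d", &cpu) == 1) {
                          if (cpu > max_cpu)
                                  max_cpu = cpu;
                          fgetc(f);       /* skip '-' or ',' */
                  }
                  fclose(f);
          }
          /* With holes in the map, online (704) is well below
           * max_cpu + 1 (1024). */
          printf("online CPUs: %ld, highest CPU id: %d\n",
                 online, max_cpu);
          return 0;
  }
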
Yes, I still need to find time to take it for a test drive; maybe by
the end of the week.
David