Message-ID: <1298258964.17806.48.camel@minggr.sh.intel.com>
Date: Mon, 21 Feb 2011 11:29:24 +0800
From: Lin Ming <ming.m.lin@...el.com>
To: Andi Kleen <andi@...stfloor.org>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...e.hu>,
Stephane Eranian <eranian@...gle.com>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 5/6] perf: Optimise topology iteration
On Mon, 2011-02-21 at 05:15 +0800, Andi Kleen wrote:
> On Mon, Feb 21, 2011 at 12:57:39AM +0800, Lin Ming wrote:
> > Currently we iterate the full machine looking for a matching core_id/nb
> > for the percore and the AMD northbridge stuff; using a smaller topology
> > mask makes sense.
>
> This is still wrong for CPU hotplug. The CPU "owning" the per core
> does not necessarily need to be online anymore.
This issue remains in the hotplug case, no matter whether we use
for_each_online_cpu or topology_thread_cpumask.
> Please drop this patch.
Re-looking at the code, I think for_each_online_cpu is wrong for percore;
we should use topology_thread_cpumask instead.
    for_each_online_cpu(i) {
            struct intel_percore *pc = per_cpu(cpu_hw_events, i).per_core;

            if (pc && pc->core_id == core_id) {
                    kfree(cpuc->per_core);
                    cpuc->per_core = pc;
                    break;
            }
    }
Assume 2 sockets:

// socket 0
cpu 0: core_id 0
cpu 1: core_id 0

// socket 1
cpu 2: core_id 0
cpu 3: core_id 0
If for_each_online_cpu is used, all 4 logical cpus will share the
same percore. This is wrong.
If topology_thread_cpumask is used, then cpu0 and cpu1 share one percore
and cpu2 and cpu3 share another percore. This is what we want.
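
For reference, a minimal sketch of the loop rewritten with
topology_thread_cpumask; the surrounding variables (cpu, cpuc, core_id)
are assumed to be the same as in the snippet above:

    for_each_cpu(i, topology_thread_cpumask(cpu)) {
            /* only the hardware threads of this cpu's core are walked,
             * so a percore can never be shared across cores/sockets */
            struct intel_percore *pc = per_cpu(cpu_hw_events, i).per_core;

            if (pc && pc->core_id == core_id) {
                    kfree(cpuc->per_core);
                    cpuc->per_core = pc;
                    break;
            }
    }

With this iteration, in the 2-socket example above, cpu2 and cpu3 are
never visited when cpu0 or cpu1 comes up, so each core keeps its own
percore structure.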
Lin Ming
>
> -Andi