Message-ID: <1294121917.23205.123.camel@minggr.sh.intel.com>
Date: Tue, 04 Jan 2011 14:18:37 +0800
From: Lin Ming <ming.m.lin@...el.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Ingo Molnar <mingo@...e.hu>, Andi Kleen <andi@...stfloor.org>,
Stephane Eranian <eranian@...gle.com>,
"robert.richter@....com" <robert.richter@....com>,
lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 5/7] perf: Optimise topology iteration
On Mon, 2011-01-03 at 19:02 +0800, Peter Zijlstra wrote:
> On Mon, 2010-12-27 at 23:38 +0800, Lin Ming wrote:
> > Currently we iterate over the full machine looking for a matching
> > core_id/nb for the percore and the AMD northbridge code; using a
> > smaller topology mask makes sense.
>
> Does topology_thread_cpumask() include offline cpus? I tried looking at
> it, but I cannot find any code clearing bits in that mask on offline.
No, it does not include offline cpus.
For x86, remove_siblinginfo() clears the bits on the cpu-down path:
take_cpu_down ->
__cpu_disable ->
native_cpu_disable ->
cpu_disable_common ->
remove_siblinginfo

static void remove_siblinginfo(int cpu)
{
	int sibling;
	struct cpuinfo_x86 *c = &cpu_data(cpu);

	for_each_cpu(sibling, cpu_core_mask(cpu)) {
		cpumask_clear_cpu(cpu, cpu_core_mask(sibling));
		/*
		 * last thread sibling in this cpu core going down
		 */
		if (cpumask_weight(cpu_sibling_mask(cpu)) == 1)
			cpu_data(sibling).booted_cores--;
	}

	for_each_cpu(sibling, cpu_sibling_mask(cpu))
		cpumask_clear_cpu(cpu, cpu_sibling_mask(sibling));
	cpumask_clear(cpu_sibling_mask(cpu));
	cpumask_clear(cpu_core_mask(cpu));
	c->phys_proc_id = 0;
	c->cpu_core_id = 0;
	cpumask_clear_cpu(cpu, cpu_sibling_setup_mask);
}
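
For reference (not a verbatim quote, just how these masks are wired up on
x86 in this era): topology_thread_cpumask()/topology_core_cpumask() map
straight onto the same per-cpu sibling/core maps, roughly as in
arch/x86/include/asm/topology.h:

	#define topology_core_cpumask(cpu)	(per_cpu(cpu_core_map, cpu))
	#define topology_thread_cpumask(cpu)	(per_cpu(cpu_sibling_map, cpu))

So once remove_siblinginfo() clears a cpu out of cpu_sibling_map/cpu_core_map,
the topology masks used by the patch no longer contain that offline cpu.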
Lin Ming
>
> > Signed-off-by: Lin Ming <ming.m.lin@...el.com>
> > ---
> > arch/x86/kernel/cpu/perf_event_amd.c | 2 +-
> > arch/x86/kernel/cpu/perf_event_intel.c | 2 +-
> > 2 files changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kernel/cpu/perf_event_amd.c b/arch/x86/kernel/cpu/perf_event_amd.c
> > index 67e2202..5a3b7b8 100644
> > --- a/arch/x86/kernel/cpu/perf_event_amd.c
> > +++ b/arch/x86/kernel/cpu/perf_event_amd.c
> > @@ -323,7 +323,7 @@ static void amd_pmu_cpu_starting(int cpu)
> > nb_id = amd_get_nb_id(cpu);
> > WARN_ON_ONCE(nb_id == BAD_APICID);
> >
> > - for_each_online_cpu(i) {
> > + for_each_cpu(i, topology_core_cpumask(cpu)) {
> > nb = per_cpu(cpu_hw_events, i).amd_nb;
> > if (WARN_ON_ONCE(!nb))
> > continue;
> > diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
> > index 354d1de..ad70c2c 100644
> > --- a/arch/x86/kernel/cpu/perf_event_intel.c
> > +++ b/arch/x86/kernel/cpu/perf_event_intel.c
> > @@ -1111,7 +1111,7 @@ static void intel_pmu_cpu_starting(int cpu)
> > if (!ht_enabled(cpu))
> > return;
> >
> > - for_each_online_cpu(i) {
> > + for_each_cpu(i, topology_thread_cpumask(cpu)) {
> > struct intel_percore *pc = per_cpu(cpu_hw_events, i).per_core;
> >
> > if (pc && pc->core_id == core_id) {
>
>
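
(Illustration only, not part of the patch: with the change, the northbridge
lookup in amd_pmu_cpu_starting() only scans cpus in the same package, along
the lines of the simplified sketch below; the allocation/refcount handling of
the real function is omitted, and "cpuc" stands for this cpu's cpu_hw_events.)

	nb_id = amd_get_nb_id(cpu);
	for_each_cpu(i, topology_core_cpumask(cpu)) {
		struct amd_nb *nb = per_cpu(cpu_hw_events, i).amd_nb;

		/* reuse the northbridge struct another cpu already set up */
		if (nb && nb->nb_id == nb_id) {
			cpuc->amd_nb = nb;
			break;
		}
	}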