Message-ID: <20121120074445.GA14539@gmail.com>
Date: Tue, 20 Nov 2012 08:44:45 +0100
From: Ingo Molnar <mingo@...nel.org>
To: David Rientjes <rientjes@...gle.com>
Cc: Mel Gorman <mgorman@...e.de>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Paul Turner <pjt@...gle.com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Christoph Lameter <cl@...ux.com>,
Rik van Riel <riel@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Johannes Weiner <hannes@...xchg.org>,
Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH 00/27] Latest numa/core release, v16

* David Rientjes <rientjes@...gle.com> wrote:
> On Tue, 20 Nov 2012, Ingo Molnar wrote:
>
> > > > numa/core at ec05a2311c35 ("Merge branch 'sched/urgent' into
> > > > sched/core") had an average throughput of 136918.34
> > > > SPECjbb2005 bops, which is a 6.3% regression.
> > >
> > > perftop during the run on numa/core at 01aa90068b12 ("sched:
> > > Use the best-buddy 'ideal cpu' in balancing decisions"):
> > >
> > > 15.99% [kernel] [k] page_fault
> > > 4.05% [kernel] [k] getnstimeofday
> > > 3.96% [kernel] [k] _raw_spin_lock
> > > 3.20% [kernel] [k] rcu_check_callbacks
> > > 2.93% [kernel] [k] generic_smp_call_function_interrupt
> > > 2.90% [kernel] [k] __do_page_fault
> > > 2.82% [kernel] [k] ktime_get
> >
> > Thanks for testing, that's very interesting - could you tell me
> > more about exactly what kind of hardware this is? I'll try to
> > find a similar system and reproduce the performance regression.
> >
>
> This happened to be an Opteron (but not an 83xx series), 2.4GHz.
Ok - roughly which family/model from /proc/cpuinfo?
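Something like this should show it (a rough one-liner - the
field names might differ slightly on older kernels):

  grep -E 'cpu family|model' /proc/cpuinfo | sort -u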
> Your benchmarks were different in the number of cores but also
> in the amount of memory; do you think numa/core would regress
> because this is 32GB and not 64GB?
I'd not expect much sensitivity to RAM size.
> > (A wild guess would be an older 4x Opteron system, 83xx
> > series or so?)
>
> Just curious, how would you guess that? [...]
I'm testing numa/core on many systems and the performance
figures seemed to roughly map to that range.
> [...] Is there something about Opteron 83xx that make
> numa/core regress?
Not that I know of - but apparently there is! I'll try to find a
system that matches yours as closely as possible and have a
look.
> > Also, the profile looks weird to me. Here is how perf top looks
> > on my system during a similarly configured, "healthy"
> > SPECjbb run:
> >
> > 91.29% perf-6687.map [.] 0x00007fffed1e8f21
> > 4.81% libjvm.so [.] 0x00000000007004a0
> > 0.93% [vdso] [.] 0x00007ffff7ffe60c
> > 0.72% [kernel] [k] do_raw_spin_lock
> > 0.36% [kernel] [k] generic_smp_call_function_interrupt
> > 0.10% [kernel] [k] format_decode
> > 0.07% [kernel] [k] rcu_check_callbacks
> > 0.07% [kernel] [k] apic_timer_interrupt
> > 0.07% [kernel] [k] call_function_interrupt
> > 0.06% libc-2.15.so [.] __strcmp_sse42
> > 0.06% [kernel] [k] irqtime_account_irq
> > 0.06% perf [.] 0x000000000004bb7c
> > 0.05% [kernel] [k] x86_pmu_disable_all
> > 0.04% libc-2.15.so [.] __memcpy_ssse3
> > 0.04% [kernel] [k] ktime_get
> > 0.04% [kernel] [k] account_group_user_time
> > 0.03% [kernel] [k] vbin_printf
> >
> > and that is what SPECjbb does: it spends 97% of its time in Java
> > code - yet there's no Java overhead visible in your profile -
> > how is that possible? Could you try a newer perf on that box:
> >
>
> It's perf top -U; the benchmark itself was unchanged, so I
> didn't think it was interesting to gather the user symbols.
> If that would be helpful, let me know!
Yeah, regular perf top output would be very helpful to get a
general sense of proportion. Thanks!
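I.e. just plain:

  perf top

without -U, so that user-space symbols are included as well.
With SPECjbb most of the cycles should show up in the JVM's
JIT-ed code, which perf resolves via the /tmp/perf-<pid>.map
file - that's the perf-6687.map entry dominating my profile
above.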
Ingo