Message-ID: <20100603172534.GF4166@basil.fritz.box>
Date: Thu, 3 Jun 2010 19:25:34 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Nick Piggin <npiggin@...e.de>
Cc: Andi Kleen <andi@...stfloor.org>,
Srivatsa Vaddagiri <vatsa@...ibm.com>,
Avi Kivity <avi@...hat.com>, Gleb Natapov <gleb@...hat.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org, hpa@...or.com,
mingo@...e.hu, tglx@...utronix.de, mtosatti@...hat.com
Subject: Re: [PATCH] use unfair spinlock when running on hypervisor.

> That would certainly be a part of it, I'm sure they have stronger
> fairness and guarantees at the expense of some performance. We saw the
> spinlock starvation first on 8-16 core Opterons I think, whereas Altix
> had run at over 1024 cores and POWER7 at 1024 threads now, apparently
> without reported problems.

I suppose P7 handles that in the HV through the pvcall.

Altix AFAIK has special hardware for this in the interconnect,
but as individual nodes get larger and gain more cores, you'll start
seeing it there too.
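
To make the trade-off concrete, here is a minimal userspace sketch
(my own illustration using GCC __sync builtins and an x86 pause hint,
not the kernel's actual arch_spinlock_t) of a fair ticket lock next
to the kind of unfair test-and-set lock this thread is about:

#include <stdint.h>

struct ticket_lock { uint16_t next, owner; };

static void ticket_lock_acquire(struct ticket_lock *l)
{
	/* FIFO: take a ticket and spin until it is served. Fair, but
	 * if the vCPU holding the next ticket is preempted by the
	 * hypervisor, every later waiter spins behind it. */
	uint16_t me = __sync_fetch_and_add(&l->next, 1);
	while (l->owner != me)
		__asm__ __volatile__("pause" ::: "memory");
}

static void ticket_lock_release(struct ticket_lock *l)
{
	__sync_fetch_and_add(&l->owner, 1);
}

struct tas_lock { int locked; };

static void tas_lock_acquire(struct tas_lock *l)
{
	/* Unfair: whichever runnable vCPU gets there first wins, so a
	 * preempted waiter doesn't block the others -- at the price of
	 * possible starvation, the problem discussed above. */
	while (__sync_lock_test_and_set(&l->locked, 1))
		while (l->locked)
			__asm__ __volatile__("pause" ::: "memory");
}

static void tas_lock_release(struct tas_lock *l)
{
	__sync_lock_release(&l->locked);
}
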
In general we now have the problem that, with increasing core counts
per socket, each NUMA node can be a fairly large SMP system by itself,
and several of the old SMP scalability problems that were fixed by
having per-node data structures are back now.

For example, this is a serious problem with the zone locks in some
workloads now on 8-core + HT systems.
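
As a sketch of the per-node pattern being referred to (illustrative
only; MAX_NODES and numa_node_id() here stand in for the kernel's
MAX_NUMNODES/numa_node_id(), and the real zone locks are more
involved), the idea is to split one global lock into one instance per
NUMA node:

#include <pthread.h>

#define MAX_NODES 8

/* One lock+counter per NUMA node, cacheline-aligned so nodes don't
 * false-share. Contention is confined to cores within one node --
 * which helps less and less as a single node grows to many cores. */
struct node_counter {
	pthread_mutex_t lock;
	unsigned long count;
} __attribute__((aligned(64)));

static struct node_counter counters[MAX_NODES] = {
	[0 ... MAX_NODES - 1] = { .lock = PTHREAD_MUTEX_INITIALIZER }
};

extern int numa_node_id(void);	/* assumed: returns current node */

static void count_event(void)
{
	struct node_counter *c = &counters[numa_node_id()];

	pthread_mutex_lock(&c->lock);
	c->count++;
	pthread_mutex_unlock(&c->lock);
}
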
> So I think actively enforcing fairness at the lock level would be
> required. Something like if it is detected that a core is not making

I suppose how exactly that works is IBM's secret sauce. Anyway, as
long as there are no reports I wouldn't worry about it.

-Andi
--
ak@...ux.intel.com -- Speaking for myself only.