Date:	Tue, 1 Jun 2010 18:38:07 +0200
From:	Andi Kleen <andi@...stfloor.org>
To:	Gleb Natapov <gleb@...hat.com>
Cc:	Andi Kleen <andi@...stfloor.org>, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, avi@...hat.com, hpa@...or.com, mingo@...e.hu,
	npiggin@...e.de, tglx@...utronix.de, mtosatti@...hat.com
Subject: Re: [PATCH] use unfair spinlock when running on hypervisor.

On Tue, Jun 01, 2010 at 07:24:14PM +0300, Gleb Natapov wrote:
> On Tue, Jun 01, 2010 at 05:53:09PM +0200, Andi Kleen wrote:
> > Gleb Natapov <gleb@...hat.com> writes:
> > >
> > > The patch below allows patching the ticket spinlock code to behave similarly to
> > > the old unfair spinlock when a hypervisor is detected. After patching unlocked
> > 
> > The question is what happens when you have a system with unfair
> > memory and you run the hypervisor on that. There it could be much worse.
> > 
> How much worse could the performance hit be?

It depends on the workload. Overall it means that a contended
lock can have much higher latencies.
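(For illustration, a minimal user-space sketch of the two lock flavours being
compared, assuming GCC __atomic builtins; the real implementations live in the
kernel's arch/x86 spinlock code and differ in detail.)

    #include <stdint.h>

    /* Old-style unfair lock: whichever CPU wins the atomic exchange gets
     * the lock, so a CPU that keeps the cache line can re-take it back to
     * back while everyone else spins. */
    struct tas_lock { int locked; };

    static void tas_lock_acquire(struct tas_lock *l)
    {
            while (__atomic_exchange_n(&l->locked, 1, __ATOMIC_ACQUIRE))
                    ;       /* spin until our exchange sees 0 */
    }

    static void tas_lock_release(struct tas_lock *l)
    {
            __atomic_store_n(&l->locked, 0, __ATOMIC_RELEASE);
    }

    /* Ticket lock: strict FIFO.  Each waiter takes a ticket and spins
     * until "owner" reaches it, so nobody can jump the queue -- which is
     * exactly what hurts in a guest when the vcpu holding the next ticket
     * is not currently running. */
    struct ticket_lock { uint16_t owner, next; };

    static void ticket_lock_acquire(struct ticket_lock *l)
    {
            uint16_t me = __atomic_fetch_add(&l->next, 1, __ATOMIC_RELAXED);

            while (__atomic_load_n(&l->owner, __ATOMIC_ACQUIRE) != me)
                    ;       /* wait for our turn */
    }

    static void ticket_lock_release(struct ticket_lock *l)
    {
            __atomic_fetch_add(&l->owner, 1, __ATOMIC_RELEASE);
    }

Under contention the ticket variant bounds each waiter to queue order, while
the test-and-set variant lets a "close" CPU win repeatedly -- the latency and
starvation trade-off being argued here.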

If you want to study some examples, see the locking problems the
RT people have with their heavyweight mutex-spinlocks.

But the main problem is that in the worst case you
can see extremely long stalls (up to a second has been observed),
which then turns into a correctness issue.
> 
> > Your new code would starve again, right?
> > 
> Yes, of course it may starve with an unfair spinlock. Since vcpus are not
> always running, there is a much smaller chance that a vcpu on a remote memory
> node will starve forever. Old kernels with unfair spinlocks are running
> fine in VMs on NUMA machines with various loads.

Try it on a NUMA system with unfair memory.

> > There's a reason the ticket spinlocks were added in the first place.
> > 
> I understand that reason and do not propose going back to the old spinlock
> on physical HW! But with virtualization the performance hit is unbearable.

Extreme unfairness can be unbearable too.
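(A deliberately simplified sketch of the "when running on hypervisor" switch
under discussion -- not the posted patch, which rewrites the ticket-lock code
in place at boot. The CPUID hypervisor-present bit (leaf 1, ECX bit 31) is
what guests typically expose; ticket_acquire()/unfair_acquire() are
hypothetical stand-ins for the two paths sketched earlier in the thread.)

    #include <stdbool.h>
    #include <cpuid.h>

    struct spinlock;                          /* opaque for this sketch */
    void ticket_acquire(struct spinlock *l);  /* fair FIFO path */
    void unfair_acquire(struct spinlock *l);  /* old test-and-set path */

    /* CPUID leaf 1, ECX bit 31 is set when running under a hypervisor. */
    static bool running_on_hypervisor(void)
    {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                    return false;
            return ecx & (1u << 31);
    }

    /* Pick the lock flavour by environment: fair ticket order on bare
     * metal, the old unfair behaviour in a guest where a preempted ticket
     * holder would otherwise stall every later waiter. */
    void spin_lock_sketch(struct spinlock *l)
    {
            if (running_on_hypervisor())
                    unfair_acquire(l);
            else
                    ticket_acquire(l);
    }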

-Andi
-- 
ak@...ux.intel.com -- Speaking for myself only.
