Date:	Tue, 1 Jun 2010 19:24:14 +0300
From:	Gleb Natapov <gleb@...hat.com>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org, avi@...hat.com,
	hpa@...or.com, mingo@...e.hu, npiggin@...e.de, tglx@...utronix.de,
	mtosatti@...hat.com
Subject: Re: [PATCH] use unfair spinlock when running on hypervisor.

On Tue, Jun 01, 2010 at 05:53:09PM +0200, Andi Kleen wrote:
> Gleb Natapov <gleb@...hat.com> writes:
> >
> > The patch below allows patching the ticket spinlock code to behave
> > similarly to the old unfair spinlock when a hypervisor is detected. After patching unlocked
> 
> The question is what happens when you have a system with unfair
> memory and you run the hypervisor on that. There it could be much worse.
> 
How much worse could the performance hit be?

> Your new code would starve again, right?
> 
Yes, of course it may starve with an unfair spinlock. But since vcpus are
not always running, there is a much smaller chance that a vcpu on a remote
memory node will starve forever. Old kernels with unfair spinlocks run
fine in VMs on NUMA machines under various loads.
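
For reference, a minimal user-space sketch of the two schemes in C11
atomics (illustrative only, not the kernel's implementation). A ticket
lock serves waiters in strict FIFO order, so a preempted vcpu stalls
everyone queued behind it; a test-and-set lock lets whichever vcpu is
actually running take the lock:

#include <stdatomic.h>

/* Ticket lock: strict FIFO. If the vcpu holding the next ticket is
 * preempted by the host, every later waiter spins uselessly. */
struct ticket_lock {
	atomic_uint next;	/* next ticket to hand out; zero-init */
	atomic_uint owner;	/* ticket currently served; zero-init */
};

static void ticket_lock(struct ticket_lock *l)
{
	unsigned int me = atomic_fetch_add(&l->next, 1);
	while (atomic_load(&l->owner) != me)
		;	/* progress depends on one specific vcpu */
}

static void ticket_unlock(struct ticket_lock *l)
{
	atomic_fetch_add(&l->owner, 1);
}

/* Unfair test-and-set lock: any running vcpu that sees the lock free
 * can take it, so a preempted waiter delays nobody else. */
struct tas_lock {
	atomic_flag locked;	/* init with ATOMIC_FLAG_INIT */
};

static void tas_lock(struct tas_lock *l)
{
	while (atomic_flag_test_and_set(&l->locked))
		;	/* whichever vcpu is running can win */
}

static void tas_unlock(struct tas_lock *l)
{
	atomic_flag_clear(&l->locked);
}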

> There's a reason the ticket spinlocks were added in the first place.
> 
I understand that reason, and I am not proposing to go back to the old
spinlock on physical HW! But under virtualization the performance hit is
unbearable.
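
The detection itself is cheap. A minimal user-space sketch, assuming
the standard CPUID convention (leaf 1, ECX bit 31 is set for guests;
this is what the kernel's X86_FEATURE_HYPERVISOR flag reflects):

#include <cpuid.h>	/* GCC/clang __get_cpuid() */
#include <stdbool.h>

static bool running_on_hypervisor(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Leaf 1 unsupported: certainly not a hypervisor guest. */
	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
		return false;
	return ecx & (1u << 31);	/* hypervisor-present bit */
}

As the quoted description says, the patch makes this decision once,
rewriting the spinlock code when a hypervisor is detected, so bare
metal keeps the ticket lock unchanged.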

--
			Gleb.
