Date:	Mon, 7 Jul 2008 21:52:59 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Jeremy Fitzhardinge <jeremy@...p.org>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Christoph Lameter <clameter@....com>,
	Petr Tesarik <ptesarik@...e.cz>, Ingo Molnar <mingo@...e.hu>,
	linux-kernel@...r.kernel.org
Subject: Re: Spinlocks: Factor out GENERIC_LOCKBREAK in order to avoid spinning with irqs disabled

On Monday 07 July 2008 21:50, Nick Piggin wrote:
> On Thursday 26 June 2008 12:51, Jeremy Fitzhardinge wrote:

> > Thomas Friebel presented results at the Xen Summit this week showing
> > that ticket locks are an absolute disaster for scalability in a virtual
> > environment, for a similar reason.  It's a bit irritating if the lock
> > holder vcpu gets preempted by the hypervisor, but it's much worse when
> > they release the lock: unless the vcpu scheduler gives a cpu to the vcpu
> > with the next ticket, it can waste up to N timeslices spinning.
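
For reference, the ticket lock logic is roughly this (a minimal sketch
using GCC atomic builtins rather than the kernel's arch-specific asm;
the names are made up):

	struct ticket_lock {
		volatile unsigned int next;	/* next ticket to hand out */
		volatile unsigned int owner;	/* ticket now being served */
	};

	static void ticket_lock(struct ticket_lock *lock)
	{
		/* atomically take the next ticket number */
		unsigned int me = __sync_fetch_and_add(&lock->next, 1);

		/* spin until our number comes up -- strictly FIFO */
		while (lock->owner != me)
			;	/* cpu_relax() in the real kernel */
	}

	static void ticket_unlock(struct ticket_lock *lock)
	{
		/* hand the lock to the one waiter holding ticket me+1 */
		lock->owner++;	/* simplified: real code needs barriers */
	}

The unlock hands the lock to one specific waiter; if that vcpu isn't
currently running, every other waiter burns its whole timeslice
spinning, which is where the up-to-N-timeslices figure comes from.
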
>
> I didn't realise it was good practice to run multiple "virtual CPUs"
> of the same guest on a single physical CPU on the host...
>
> > I'm experimenting with adding pvops hook to allow you to put in new
> > spinlock implementations on the fly.  If nothing else, it will be useful
> > for experimenting with different algorithms.  But it definitely seems
> > like the old unfair lock algorithm played much better with a virtual
> > environment, because the next cpu to get the lock is the next one the
> > scheduler gives time, rather than dictating an order - and the scheduler
> > should mitigate the unfairness that ticket locks were designed to solve.
>
> ... if it is good practice, then I guess virtualizing spinlocks is
> reasonable. If not, then "don't do that". Considering that many
> bare-metal systems will probably run pv kernels, every little cost
> adds up.
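
On the "old unfair lock algorithm" above: that was essentially a
test-and-set byte lock, roughly like this (again a simplified sketch
with GCC builtins; the names are made up):

	static void byte_lock(volatile unsigned char *lock)
	{
		/* keep trying to swap in 1 until the old value was 0 */
		while (__sync_lock_test_and_set(lock, 1))
			while (*lock)
				;	/* read-only spin while held */
	}

	static void byte_unlock(volatile unsigned char *lock)
	{
		__sync_lock_release(lock);	/* store 0 with release */
	}

Nobody queues: whichever cpu happens to be running when the lock goes
free can take it, so under a hypervisor the vcpu scheduler effectively
picks the next owner -- at the cost of the starvation that ticket
locks were designed to fix.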

That said, you wouldn't need to oversubscribe physical CPUs to hit
the suboptimal behaviour.

Basically, I just ask that the performance improvement be measured
with some "realistic" configuration; then it should be easier to
justify.