Date:	Mon, 7 Jul 2008 21:50:39 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Jeremy Fitzhardinge <jeremy@...p.org>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Christoph Lameter <clameter@....com>,
	Petr Tesarik <ptesarik@...e.cz>, Ingo Molnar <mingo@...e.hu>,
	linux-kernel@...r.kernel.org
Subject: Re: Spinlocks: Factor out GENERIC_LOCKBREAK in order to avoid spinning with irqs disabled

On Thursday 26 June 2008 12:51, Jeremy Fitzhardinge wrote:
> Peter Zijlstra wrote:
> > On Mon, 2008-06-23 at 13:45 -0700, Christoph Lameter wrote:
> >> On Mon, 23 Jun 2008, Peter Zijlstra wrote:
> >>>> It is good that the locks are built with _trylock and _can_lock
> >>>> because then we can re-enable interrupts while spinning.
> >>>
> >>> Well, good and bad; the flip side is that fairness schemes like ticket
> >>> locks are utterly defeated.
> >>
> >> True. But maybe we can make these fairness schemes more generic so that
> >> they can go into core code?
> >
> > The trouble with ticket locks is that they can't handle waiters going
> > away - or in this case getting preempted by irq handlers. The one who
> > took the ticket must pass it on, so if you're preempted the lock just
> > sits there idle until you get back to deal with it.
> >
> > But yeah, perhaps another fairness scheme might work in the generic
> > code..
>
> Thomas Friebel presented results at the Xen Summit this week showing
> that ticket locks are an absolute disaster for scalability in a virtual
> environment, for a similar reason.  It's a bit irritating if the lock
> holder vcpu gets preempted by the hypervisor, but it's much worse when
> it releases the lock: unless the vcpu scheduler gives a cpu to the vcpu
> with the next ticket, it can waste up to N timeslices spinning.

I didn't realise it was good practice to run multiple "virtual CPUs"
of the same guest on a single physical CPU on the host...
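
For reference, a minimal user-space sketch of a ticket lock (C11
atomics, not the kernel's actual arch code), just to make the
fairness/preemption interaction described above concrete:

#include <stdatomic.h>

struct ticket_lock {
	atomic_ushort next;	/* next ticket to hand out */
	atomic_ushort owner;	/* ticket whose turn it currently is */
};

void ticket_lock(struct ticket_lock *lock)
{
	/* take a ticket; waiters are strictly FIFO from here on */
	unsigned short me = atomic_fetch_add(&lock->next, 1);

	while (atomic_load(&lock->owner) != me)
		;	/* spin until our ticket comes up */
}

void ticket_unlock(struct ticket_lock *lock)
{
	/* hand the lock to the holder of the next ticket, nobody else */
	atomic_fetch_add(&lock->owner, 1);
}

The unlock path can only pass the lock to the holder of the next
ticket.  If that waiter has been preempted (by an irq handler, or by
the hypervisor descheduling its vcpu), every later waiter keeps
spinning until it runs again.  An unfair test-and-set lock has no such
ordering, which is also why falling back to a trylock loop while
re-enabling interrupts defeats the ticketing.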


> I'm experimenting with adding a pvops hook to allow you to put in new
> spinlock implementations on the fly.  If nothing else, it will be useful
> for experimenting with different algorithms.  But it definitely seems
> like the old unfair lock algorithm played much better with a virtual
> environment, because the next cpu to get the lock is the next one the
> scheduler gives time, rather than dictating an order - and the scheduler
> should mitigate the unfairness that ticket locks were designed to solve.

... if it is good practice, then I guess virtualizing spinlocks is
reasonable. If not, then "don't do that". Considering that many
bare-metal systems will probably run pv kernels, every little cost
adds up.
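
To make that tradeoff concrete, here is a hypothetical sketch of the
kind of hook Jeremy describes (the names are illustrative, not the
actual pvops interface): the generic lock paths indirect through a
table of function pointers that a guest can replace before SMP
bringup.

struct raw_spinlock;		/* whatever the arch lock word is */

struct pv_lock_ops {
	void (*spin_lock)(struct raw_spinlock *lock);
	int  (*spin_trylock)(struct raw_spinlock *lock);
	void (*spin_unlock)(struct raw_spinlock *lock);
};

/* defaults to the native ticket-lock implementation */
extern struct pv_lock_ops pv_lock_ops;

static inline void generic_spin_lock(struct raw_spinlock *lock)
{
	/*
	 * On bare metal this is at least an extra indirect call per
	 * acquisition unless the call sites get patched, which is the
	 * cost referred to above.
	 */
	pv_lock_ops.spin_lock(lock);
}

A Xen guest could then install its own spin_lock that spins for a
bounded number of iterations and blocks the vcpu (for instance via an
event-channel poll) instead of burning the rest of its timeslice,
while bare metal keeps the native implementation.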

