Date:	Thu, 3 Jun 2010 22:38:32 +1000
From:	Nick Piggin <npiggin@...e.de>
To:	Srivatsa Vaddagiri <vatsa@...ibm.com>
Cc:	Avi Kivity <avi@...hat.com>, Andi Kleen <andi@...stfloor.org>,
	Gleb Natapov <gleb@...hat.com>, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, hpa@...or.com, mingo@...e.hu,
	tglx@...utronix.de, mtosatti@...hat.com
Subject: Re: [PATCH] use unfair spinlock when running on hypervisor.

On Thu, Jun 03, 2010 at 05:34:50PM +0530, Srivatsa Vaddagiri wrote:
> On Thu, Jun 03, 2010 at 08:38:55PM +1000, Nick Piggin wrote:
> > > Guest side:
> > > 
> > > static inline void spin_lock(spinlock_t *lock)
> > > {
> > > 	raw_spin_lock(&lock->rlock);
> > > +       __get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
> > > }
> > > 
> > > static inline void spin_unlock(spinlock_t *lock)
> > > {
> > > +	__get_cpu_var(gh_vcpu_ptr)->defer_preempt--;
> > >         raw_spin_unlock(&lock->rlock);
> > > }
> > > 
> > > [similar changes to other spinlock variants]
> > 
> > Great, this is a nice way to improve it.
> > 
> > You might want to consider playing with first taking a ticket, and
> > then, if we fail to acquire the lock immediately, incrementing
> > defer_preempt before we start spinning.
> >
> > The downside of this would be if we waste all our slice on spinning
> > and then get preempted in the critical section. But with ticket locks
> > you can easily see how many entries are in the queue in front of you.
> > So you could experiment with starting to defer preemption when we
> > notice we are getting toward the head of the queue.
> 
> Mm - my goal is to avoid long spin times in the first place (because the
> owning vcpu was descheduled at an unfortunate time, i.e. while it was holding
> a lock). In that sense, I am targeting preemption deferral for the lock
> *holder* rather than the lock acquirer. So ideally, whenever somebody tries
> to grab a lock it should be free most of the time; it can be held only if the
> owner is currently running - which means we won't have to spin too long for
> the lock.

Holding a ticket in the queue is effectively the same as holding the
lock, from the point of view of the processes waiting behind it.

The difference, of course, is that CPU cycles spent by a queued waiter
do not directly reduce anyone's latency - only the owner's cycles do.
Spinlock critical sections tend to be several orders of magnitude
shorter than context switch times, so if you preempt the guy waiting
at the head of the queue, it's almost as bad as preempting the lock
holder.
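
To make that concrete, here is a rough user-space model of what I mean:
a ticket lock that only sets the defer-preempt hint once the waiter is
near the head of the queue. The gh_vcpu/defer_preempt names mirror the
quoted patch; DEFER_WINDOW, the stand-alone ticketlock type and the
stdatomic modelling are purely illustrative, not a proposed
implementation.

#include <stdatomic.h>
#include <stdbool.h>

/* In the real patch this hint lives in memory shared with the
 * hypervisor; a plain per-thread counter keeps the sketch
 * self-contained. */
static _Thread_local struct {
        atomic_int defer_preempt;
} gh_vcpu;

#define DEFER_WINDOW 2  /* invented threshold for "near the head" */

struct ticketlock {
        atomic_uint next;   /* next ticket to hand out */
        atomic_uint owner;  /* ticket currently being served */
};

static void ticket_lock(struct ticketlock *lock)
{
        unsigned int me = atomic_fetch_add(&lock->next, 1);
        bool deferring = false;

        while (atomic_load(&lock->owner) != me) {
                /* Only start asking for preemption deferral once we
                 * are close to the head, so a long wait far back in
                 * the queue does not eat the deferral window. */
                if (!deferring &&
                    me - atomic_load(&lock->owner) <= DEFER_WINDOW) {
                        atomic_fetch_add(&gh_vcpu.defer_preempt, 1);
                        deferring = true;
                }
        }
        if (!deferring) /* uncontended: defer for the critical section */
                atomic_fetch_add(&gh_vcpu.defer_preempt, 1);
}

static void ticket_unlock(struct ticketlock *lock)
{
        atomic_fetch_sub(&gh_vcpu.defer_preempt, 1);
        atomic_fetch_add(&lock->owner, 1);
}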

> > Have you also looked at how s390 checks whether the owning vcpu is
> > running: if so it spins, if not it yields to the hypervisor -
> > effectively turning it into an adaptive lock. This could be
> > applicable here as well.
> 
> I don't think even s390 does adaptive spinlocks. Also, afaik s390 z/VM does
> gang scheduling of vcpus, which greatly reduces the severity of this problem -
> essentially the lock acquirer and holder run simultaneously on different cpus
> all the time. Gang scheduling is on my list of things to look at much later
> (although I have been warned that it's a scalability nightmare!).

It effectively is an adaptive lock. The spinlock itself doesn't sleep,
of course, but it yields to the hypervisor if the owner has been
preempted. This is pretty closely analogous to Linux's adaptive
mutexes.
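
A minimal sketch of that behaviour, modelled in user space
(vcpu_is_running() is an invented stand-in for the architecture's "is
this vcpu on a real cpu?" query, and sched_yield() stands in for
yielding to the hypervisor):

#include <stdatomic.h>
#include <stdbool.h>
#include <sched.h>

/* Hypothetical: a real guest would get this from a paravirt or
 * architecture facility; stubbed here so the sketch compiles. */
static bool vcpu_is_running(int vcpu)
{
        (void)vcpu;
        return true;
}

struct adaptive_lock {
        atomic_int locked;
        atomic_int owner;   /* vcpu id of the current holder */
};

static void adaptive_lock_acquire(struct adaptive_lock *lock, int my_vcpu)
{
        int expected = 0;

        while (!atomic_compare_exchange_weak(&lock->locked, &expected, 1)) {
                expected = 0;
                if (vcpu_is_running(atomic_load(&lock->owner)))
                        continue;      /* owner running: keep spinning */
                sched_yield();         /* owner preempted: give up our slice */
        }
        atomic_store(&lock->owner, my_vcpu);
}

static void adaptive_lock_release(struct adaptive_lock *lock)
{
        atomic_store(&lock->locked, 0);
}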

s390 also has the diag9c instruction, which I suppose somehow boosts
the priority of a preempted, contended lock holder. Even alongside any
other optimizations their hypervisor may do, like gang scheduling,
diag9c apparently provides quite a large improvement in some cases.
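
And if the hypervisor exposes a directed yield (hypervisor_yield_to()
below is an invented, diag9c-shaped stand-in), the wait loop above can
donate its slice to the preempted holder instead of yielding blindly:

/* Reuses struct adaptive_lock and vcpu_is_running() from the sketch
 * above; hypervisor_yield_to() is hypothetical. */
static void hypervisor_yield_to(int vcpu)
{
        (void)vcpu;   /* stub: a real guest would trap to the hypervisor */
}

static void adaptive_lock_acquire_directed(struct adaptive_lock *lock,
                                           int my_vcpu)
{
        int expected = 0;

        while (!atomic_compare_exchange_weak(&lock->locked, &expected, 1)) {
                int holder = atomic_load(&lock->owner);

                expected = 0;
                if (!vcpu_is_running(holder))
                        hypervisor_yield_to(holder);  /* boost the holder */
        }
        atomic_store(&lock->owner, my_vcpu);
}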

And they aren't even using ticket spinlocks!!

So I think these things are fairly important to look at.
