Date:	Thu, 3 Jun 2010 20:38:55 +1000
From:	Nick Piggin <npiggin@...e.de>
To:	Srivatsa Vaddagiri <vatsa@...ibm.com>
Cc:	Avi Kivity <avi@...hat.com>, Andi Kleen <andi@...stfloor.org>,
	Gleb Natapov <gleb@...hat.com>, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, hpa@...or.com, mingo@...e.hu,
	tglx@...utronix.de, mtosatti@...hat.com
Subject: Re: [PATCH] use unfair spinlock when running on hypervisor.

On Thu, Jun 03, 2010 at 09:50:51AM +0530, Srivatsa Vaddagiri wrote:
> On Wed, Jun 02, 2010 at 12:00:27PM +0300, Avi Kivity wrote:
> > 
> > There are two separate problems: the more general problem is that
> > the hypervisor can put a vcpu to sleep while holding a lock, causing
> > other vcpus to spin until the end of their time slice.  This can
> > only be addressed with hypervisor help.
> 
> Fyi - I have an early patch ready to address this issue. Basically I am using
> host-kernel memory (mmap'ed into the guest as I/O memory via the ivshmem driver)
> to hint to the host whenever the guest is in a spinlock'ed section; the host
> scheduler reads this hint to defer preemption.
> 
> Guest side:
> 
> static inline void spin_lock(spinlock_t *lock)
> {
> 	raw_spin_lock(&lock->rlock);
> +	__get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
> }
> 
> static inline void spin_unlock(spinlock_t *lock)
> {
> +	__get_cpu_var(gh_vcpu_ptr)->defer_preempt--;
> 	raw_spin_unlock(&lock->rlock);
> }
> 
> [similar changes to other spinlock variants]
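
(An aside for concreteness: the host side of such a scheme might look
roughly like the sketch below. The struct, field names, and the
grace-period policy are illustrative assumptions, not taken from the
actual patch.)

struct gh_vcpu_state {
	unsigned int defer_preempt;	/* raised by guest spin_lock() */
};

/* Called by the host scheduler before preempting a vcpu task. */
static bool vcpu_wants_preempt_deferred(struct gh_vcpu_state *s)
{
	/*
	 * Plain read of guest-written shared memory; the guest may be
	 * buggy or malicious, so the host should only defer preemption
	 * for a bounded grace period, never skip it outright.
	 */
	return ACCESS_ONCE(s->defer_preempt) != 0;
}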

Great, this is a nice way to improve it.

You might want to consider playing with first taking a ticket and
then, if we fail to acquire the lock immediately, incrementing
defer_preempt before we start spinning, along the lines of the
sketch below.
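
A minimal sketch of that ordering, on a simplified ticket lock
(head/tail, fetch_and_add() and the gh_vcpu_ptr hint are illustrative
stand-ins for whatever the real lock implementation uses):

static inline void spin_lock(spinlock_t *lock)
{
	unsigned int ticket = fetch_and_add(&lock->tail, 1);

	if (ACCESS_ONCE(lock->head) != ticket) {
		/*
		 * Contended: raise the hint before we start spinning,
		 * so the wait itself is covered, not just the
		 * critical section.
		 */
		__get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
		while (ACCESS_ONCE(lock->head) != ticket)
			cpu_relax();
	} else {
		/*
		 * Uncontended fast path: the hint covers only the
		 * critical section, as in the patch above.
		 */
		__get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
	}
}

Either path leaves defer_preempt incremented exactly once, so the
existing spin_unlock() decrement stays balanced.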

The downside of this would be wasting our whole slice on spinning and
then getting preempted in the critical section. But with ticket locks
you can easily see how many entries are in the queue ahead of you, so
you could experiment with starting to defer preemption only once you
notice you are getting toward the head of the queue, as sketched
below.
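
Sketched on the same simplified ticket lock, with an illustrative
DEFER_THRESHOLD standing in for "near the head of the queue":

#define DEFER_THRESHOLD	2	/* illustrative; tune experimentally */

static inline void spin_lock(spinlock_t *lock)
{
	unsigned int ticket = fetch_and_add(&lock->tail, 1);
	bool hinted = false;

	while (ACCESS_ONCE(lock->head) != ticket) {
		/* ticket - head == number of waiters ahead of us. */
		if (!hinted &&
		    ticket - ACCESS_ONCE(lock->head) <= DEFER_THRESHOLD) {
			__get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
			hinted = true;
		}
		cpu_relax();
	}
	if (!hinted)	/* acquired on the first check */
		__get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
}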

Have you also looked at how s390 handles this? It checks whether the
owning vcpu is running: if so it spins, if not it yields to the
hypervisor, effectively turning the lock into an adaptive one. That
could be applicable here as well; a rough sketch follows.
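
Something like the following, where try_acquire(), lock->owner_cpu,
vcpu_is_running() and yield_to_hypervisor() are hypothetical
placeholders for the arch- and hypervisor-specific pieces:

static inline void spin_lock(spinlock_t *lock)
{
	while (!try_acquire(lock)) {
		int owner = ACCESS_ONCE(lock->owner_cpu);

		if (owner >= 0 && !vcpu_is_running(owner))
			yield_to_hypervisor(owner);	/* donate our slice */
		else
			cpu_relax();		/* owner is running: spin */
	}
}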
