Message-ID: <1275540711.29413.63.camel@edumazet-laptop>
Date: Thu, 03 Jun 2010 06:51:51 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: vatsa@...ibm.com
Cc: Avi Kivity <avi@...hat.com>, Andi Kleen <andi@...stfloor.org>,
Gleb Natapov <gleb@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, hpa@...or.com, mingo@...e.hu, npiggin@...e.de,
tglx@...utronix.de, mtosatti@...hat.com
Subject: Re: [PATCH] use unfair spinlock when running on hypervisor.
On Thursday, 03 June 2010 at 09:50 +0530, Srivatsa Vaddagiri wrote:
> On Wed, Jun 02, 2010 at 12:00:27PM +0300, Avi Kivity wrote:
> >
> > There are two separate problems: the more general problem is that
> > the hypervisor can put a vcpu to sleep while holding a lock, causing
> > other vcpus to spin until the end of their time slice. This can
> > only be addressed with hypervisor help.
>
> Fyi - I have an early patch ready to address this issue. Basically I am using
> host-kernel memory (mmap'ed into the guest as I/O memory via the ivshmem driver)
> to hint the host whenever the guest is in a spinlocked section; the host
> scheduler reads this hint to defer preemption.
>
> Guest side:
>
> static inline void spin_lock(spinlock_t *lock)
> {
> 	raw_spin_lock(&lock->rlock);
> +	__get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
1) __this_cpu_inc() should be faster.

2) Isn't it a bit late to do this increment _after_
raw_spin_lock(&lock->rlock)?
> }
>
> static inline void spin_unlock(spinlock_t *lock)
> {
> +	__get_cpu_var(gh_vcpu_ptr)->defer_preempt--;
> 	raw_spin_unlock(&lock->rlock);
> }