Message-ID: <20100603144817.GA30321@linux.vnet.ibm.com>
Date: Thu, 3 Jun 2010 20:18:17 +0530
From: Srivatsa Vaddagiri <vatsa@...ibm.com>
To: Nick Piggin <npiggin@...e.de>
Cc: Avi Kivity <avi@...hat.com>, Andi Kleen <andi@...stfloor.org>,
Gleb Natapov <gleb@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, hpa@...or.com, mingo@...e.hu,
tglx@...utronix.de, mtosatti@...hat.com, schwidefsky@...ibm.com,
heiko.carstens@...ibm.com
Subject: Re: [PATCH] use unfair spinlock when running on hypervisor.
On Thu, Jun 03, 2010 at 11:45:00PM +1000, Nick Piggin wrote:
> > Ok, got it - although that approach is not advisable in some cases, for
> > example when the lock-holder vcpu and the lock-acquiring vcpu are scheduled
> > on the same pcpu by the hypervisor (which was experimented with in [1],
> > where they found a huge performance hit).
>
> Sure but if you had adaptive yielding, that solves that problem.
I guess so.
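
To make the acquire-path idea concrete, here is a minimal user-space sketch
of a ticket lock with adaptive yield. This is only a model of the technique,
not an existing interface: holder_running() is a hypothetical stand-in for a
hypervisor query (hypercall or shared page) telling us whether the vcpu
holding the lock is currently scheduled, and sched_yield() stands in for a
directed yield back to the hypervisor.

#include <stdatomic.h>
#include <stdbool.h>
#include <sched.h>

#define SPIN_THRESHOLD 1024

struct ticketlock {
	atomic_uint next;	/* next ticket to hand out */
	atomic_uint owner;	/* ticket currently being served */
};

/* Hypothetical: a real implementation would ask the hypervisor whether
 * the lock holder's vcpu is on a physical cpu right now.  Modeled here
 * as "always yes" so the sketch compiles and runs. */
static bool holder_running(struct ticketlock *lock)
{
	(void)lock;
	return true;
}

static void ticket_lock(struct ticketlock *lock)
{
	unsigned int me = atomic_fetch_add(&lock->next, 1);
	unsigned int spins = 0;

	while (atomic_load(&lock->owner) != me) {
		/* Adaptive part: yield if the holder is known to be
		 * preempted, or if we have spun long enough that it
		 * probably is. */
		if (!holder_running(lock) || ++spins > SPIN_THRESHOLD) {
			sched_yield();	/* in a guest: directed-yield hypercall */
			spins = 0;
		}
	}
}

static void ticket_unlock(struct ticketlock *lock)
{
	atomic_fetch_add(&lock->owner, 1);
}

FIFO fairness is preserved because tickets are still served in order; the
yield only changes who burns cpu time while waiting.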
> > Oops, you are right - sorry, I should have checked more closely earlier.
> > Given that we may not always be able to guarantee that locked critical
> > sections will not be preempted (ex: when a real-time task takes over),
> > we will need a combination of both approaches (i.e. request preemption
> > deferral on the lock hold path + yield on the lock acquire path if the
> > owner is !scheduled). The advantage of the former approach is that it
> > could reduce job turnaround times in most cases (as the lock is
> > available when we want it, or we don't have to wait too long for it).
>
> Both I think would be good. It might be interesting to talk with the
> s390 guys and see if they can look at ticket locks and preempt defer
> techniques too (considering they already do the other half of the
> equation well).
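
For reference, the hold-path half could look roughly like the sketch below,
reusing ticket_lock()/ticket_unlock() from the earlier sketch. The shared
page layout and the field names (defer_preempt, preempt_pending) are
hypothetical - just one way a guest might hint "I am in a critical section"
to the hypervisor, not an existing ABI.

struct shared_vcpu_state {
	atomic_bool defer_preempt;	/* guest -> host: in critical section */
	atomic_bool preempt_pending;	/* host -> guest: host deferred a preemption */
};

static void ticket_lock_deferred(struct ticketlock *lock,
				 struct shared_vcpu_state *vs)
{
	ticket_lock(lock);
	/* Best effort: a preemption can still land in the tiny window
	 * between acquiring the lock and raising the hint. */
	atomic_store(&vs->defer_preempt, true);
}

static void ticket_unlock_deferred(struct ticketlock *lock,
				   struct shared_vcpu_state *vs)
{
	ticket_unlock(lock);
	atomic_store(&vs->defer_preempt, false);
	/* If the host held off a preemption for us, give the cpu back
	 * promptly so the hint cannot be abused to hog the pcpu. */
	if (atomic_exchange(&vs->preempt_pending, false))
		sched_yield();	/* in a guest: yield hypercall */
}

The hint stays clear while spinning (we are happy to be preempted then, or
even to yield voluntarily) and is raised only for the critical section
itself.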
Martin/Heiko,
Do you want to comment on this?
- vatsa