Message-ID: <20110121140208.GA13609@linux.vnet.ibm.com>
Date: Fri, 21 Jan 2011 19:32:08 +0530
From: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
To: Jeremy Fitzhardinge <jeremy@...p.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Nick Piggin <npiggin@...nel.dk>,
Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
Américo Wang <xiyou.wangcong@...il.com>,
Eric Dumazet <dada1@...mosbay.com>,
Jan Beulich <JBeulich@...ell.com>, Avi Kivity <avi@...hat.com>,
Xen-devel <xen-devel@...ts.xensource.com>,
"H. Peter Anvin" <hpa@...or.com>,
Linux Virtualization <virtualization@...ts.linux-foundation.org>,
Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>,
kvm@...r.kernel.org, suzuki@...ibm.com
Subject: Re: [PATCH 2/3] kvm hypervisor : Add hypercalls to support
pv-ticketlock
On Thu, Jan 20, 2011 at 09:56:27AM -0800, Jeremy Fitzhardinge wrote:
> > The key here is not to
> > sleep when waiting for locks (as implemented by current patch-series, which can
> > put other VMs at an advantage by giving them more time than they are entitled
> > to)
>
> Why? If a VCPU can't make progress because its waiting for some
> resource, then why not schedule something else instead?
In the process, "something else" can get a larger share of CPU resources than
it's entitled to, and that's where I was a bit concerned. I guess one could
employ hard limits to cap "something else's" bandwidth where that is a real
concern (like clouds).
> Presumably when
> the VCPU does become runnable, the scheduler will credit its previous
> blocked state and let it run in preference to something else.
which may not be sufficient for it to gain back the bandwidth lost while blocked
(speaking of the mainline scheduler at least).
> > Is there a way we can dynamically expand the size of lock only upon contention
> > to include additional information like owning vcpu? Have the lock point to a
> > per-cpu area upon contention where additional details can be stored perhaps?
>
> As soon as you add a pointer to the lock, you're increasing its size.
I didn't really mean to expand the size statically. Rather, have some bits of the
lock word store a pointer to a per-cpu area when there is contention (somewhat
similar to how bits of rt_mutex.owner are used). I haven't thought through this in
detail to see whether that is possible, though.
- vatsa
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/