Date:	Sat, 22 Jan 2011 11:44:17 +0530
From:	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
To:	Rik van Riel <riel@...hat.com>
Cc:	Jeremy Fitzhardinge <jeremy@...p.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Nick Piggin <npiggin@...nel.dk>,
	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
	Américo Wang <xiyou.wangcong@...il.com>,
	Eric Dumazet <dada1@...mosbay.com>,
	Jan Beulich <JBeulich@...ell.com>, Avi Kivity <avi@...hat.com>,
	Xen-devel <xen-devel@...ts.xensource.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	Linux Virtualization <virtualization@...ts.linux-foundation.org>,
	Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>,
	kvm@...r.kernel.org, suzuki@...ibm.com
Subject: Re: [PATCH 2/3] kvm hypervisor : Add hypercalls to support
 pv-ticketlock

On Fri, Jan 21, 2011 at 09:48:29AM -0500, Rik van Riel wrote:
> >>Why?  If a VCPU can't make progress because it's waiting for some
> >>resource, then why not schedule something else instead?
> >
> >In the process, "something else" can get a greater share of cpu resource than
> >it's entitled to, and that's where I was a bit concerned. I guess one could
> >employ hard-limits to cap "something else's" bandwidth where it is of real
> >concern (like clouds).
> 
> I'd like to think I fixed those things in my yield_task_fair +
> yield_to + kvm_vcpu_on_spin patch series from yesterday.

Speaking of the spinlock-in-virtualized-environment problem as a whole, IMHO
the kvm_vcpu_on_spin + yield changes won't provide the best results,
especially where ticketlocks are involved and are paravirtualized in the
manner being discussed in this thread. An important focus of pv-ticketlocks
is to reduce the lock _acquisition_ time by ensuring that the next-in-line
vcpu gets to run asap when a ticket lock is released. With the way
kvm_vcpu_on_spin + yield_to is implemented, I don't see how we can provide
the best lock acquisition times for threads. It would be nice, though, to
compare the two approaches (the kvm_vcpu_on_spin optimization and the
pv-ticketlock scheme) and get some real-world numbers. Unfortunately I don't
have access to the PLE-capable hardware required to test your
kvm_vcpu_on_spin changes.
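
To make that concrete, here's a rough sketch of the kind of slowpath I have
in mind; the lock layout, the SPIN_THRESHOLD value and the two hypercall
numbers below are placeholders for illustration, not what the actual patches
define:

/*
 * Illustration only: kvm_hypercall2() is the existing guest-side
 * hypercall helper, but KVM_HC_WAIT_FOR_KICK / KVM_HC_KICK_CPU and
 * the lock layout here are placeholders for this sketch.
 */
struct ticketlock {
	struct {
		unsigned short head;	/* ticket now being served */
		unsigned short tail;	/* next ticket to hand out */
	} tickets;
};

#define SPIN_THRESHOLD	(1 << 11)	/* spins before blocking in host */

static void pv_ticket_lock_slowpath(struct ticketlock *lock,
				    unsigned short want)
{
	for (;;) {
		unsigned int count = SPIN_THRESHOLD;

		do {
			if (ACCESS_ONCE(lock->tickets.head) == want)
				return;	/* our turn came while spinning */
			cpu_relax();
		} while (--count);

		/*
		 * Block in the hypervisor until the unlocker kicks our
		 * ticket; wakeups may be spurious, hence the outer loop.
		 */
		kvm_hypercall2(KVM_HC_WAIT_FOR_KICK,
			       (unsigned long)lock, want);
	}
}

static void pv_ticket_unlock_kick(struct ticketlock *lock,
				  unsigned short next)
{
	/* Wake whichever vcpu (if any) is halted waiting on 'next'. */
	kvm_hypercall2(KVM_HC_KICK_CPU, (unsigned long)lock, next);
}

The point is that on unlock the next-in-line vcpu is woken directly, instead
of us hoping the scheduler happens to pick it soon.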

Also, it may be possible for pv-ticketlocks to track the owning vcpu and
make use of a yield-to interface as a further optimization to avoid the
"others-get-more-time" problem, but PeterZ rightly pointed out that PI
(priority inheritance) would be a better solution there than yield-to. So
overall, IMO kvm_vcpu_on_spin + yield_to could be the best solution for
unmodified guests, while paravirtualized ticketlocks + some sort of PI would
be the better solution where we have the luxury of modifying guest sources!
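
Sketching that owner-tracking idea (the owner_cpu field and the yield-to
hypercall below are invented names, purely to illustrate; with PI the host
would instead lend the waiter's scheduling entitlement to the owner for the
duration):

/*
 * Hypothetical owner tracking: the holder records its vcpu id so a
 * spinning waiter can direct its yield at the lock owner rather than
 * at some random runnable vcpu. KVM_HC_YIELD_TO_CPU is a made-up
 * hypercall name for this sketch.
 */
struct pv_ticketlock {
	struct ticketlock tl;
	int owner_cpu;		/* vcpu id of holder, -1 if unlocked */
};

static void pv_spin_wait(struct pv_ticketlock *lock, unsigned short want)
{
	while (ACCESS_ONCE(lock->tl.tickets.head) != want) {
		int owner = ACCESS_ONCE(lock->owner_cpu);

		/* Donate our timeslice to the holder so it releases sooner. */
		if (owner >= 0)
			kvm_hypercall1(KVM_HC_YIELD_TO_CPU,
				       (unsigned long)owner);
		cpu_relax();
	}
	lock->owner_cpu = smp_processor_id();	/* we are the holder now */
}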

- vatsa
