Date:	Tue, 22 Feb 2011 16:11:31 +0200
From:	Avi Kivity <avi@...hat.com>
To:	"Roedel, Joerg" <Joerg.Roedel@....com>
CC:	Marcelo Tosatti <mtosatti@...hat.com>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Zachary Amsden <zamsden@...hat.com>
Subject: Re: [PATCH 0/6] KVM support for TSC scaling

On 02/22/2011 01:11 PM, Roedel, Joerg wrote:
> >
> >  Ok, so your scenario is
> >
> >  - boot on host H1
> >  - no intervening migrations
> >  - migrate to host Hnew
> >  - all succeeding migrations are only to new hosts or back to H1
> >
> >  This is somewhat artificial, and not very different from an all-new cluster.
>
> This is at least the scenario where the new hardware feature makes
> sense. It's clear that migrating a guest between hosts without
> tsc-scaling will make the tsc appear unstable to the guest. This is
> basically the same situation as we have today.
> In fact, for older hosts the feature can be emulated in software by
> trapping tsc accesses from the guest. Isn't this what Zachary has been
> working on?

Yes.  It's of dubious value though: you get a stable tsc, but it's 
incredibly slow.

>   During my implementation I understood tsc-scaling as a
> hardware-supported way to do this, and that's the reason I implemented
> it the way it is.

Right.  The only question is what the added guest switch cost is.  If 
it's expensive (say, >= 100 cycles) then we need a mode where we can 
drop this cost by applying the same multiplier to all guests and the 
host (this can be done as an add-on optimization patch).  If, however, 
we end up always recommending that all hosts use the same virtual tsc 
rate, why should we support individual rates for guests?

It does make sense from a generality point of view: we provide 
mechanism, not policy.  We just need to make sure that the policies we 
like are optimized as far as they can go.

> >  [the whole thing is kind of sad; we went through a huge effort to make
> >  clocks work on virtual machines in spite of the tsc issues; then we have
> >  a hardware solution, but can't use it because of old hardware.  Same
> >  thing happens with the effort put into shadow in the pre-npt days]
>
> The shadow code has had a revival, as it is required for emulating
> nested-npt and nested-ept, so the effort still has value :)

Yes.  Some of it, though, is unused (unsync pages).  And it's hard for 
me to see nested svm itself used in production, due to the huge 
performance hit for I/O.  Maybe an emulated iommu (so we can do virtio 
device assignment, or even real device assignment all the way from the 
host) will help, or even more hardware support a la s390.

-- 
error compiling committee.c: too many arguments to function

