Date:   Thu, 17 Aug 2017 18:27:31 +0300
From:   "Michael S. Tsirkin" <mst@...hat.com>
To:     Radim Krčmář <rkrcmar@...hat.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH] kvm: x86: disable KVM_FAST_MMIO_BUS

On Thu, Aug 17, 2017 at 03:51:31PM +0200, Radim Krčmář wrote:
> 2017-08-17 01:31+0300, Michael S. Tsirkin:
> > On Wed, Aug 16, 2017 at 11:25:35PM +0200, Paolo Bonzini wrote:
> > > On 16/08/2017 21:59, Michael S. Tsirkin wrote:
> > > > On Wed, Aug 16, 2017 at 09:03:17PM +0200, Radim Krčmář wrote:
> > > >>>> how about we blacklist nested virt for this optimization?
> > > >>
> > > >> Not every hypervisor can be easily detected ...
> > > > 
> > > > Hypervisors that don't set a hypervisor bit in CPUID are violating the
> > > > spec themselves, aren't they?  Anyway, we can add a management option
> > > > for use in a nested scenario.
> > > 
> > > No, the hypervisor bit only says that CPUID leaf 0x40000000 is defined.
> > > See for example
> > > https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009458:
> > > "Intel and AMD have also reserved CPUID leaves 0x40000000 - 0x400000FF
> > > for software use. Hypervisors can use these leaves to provide an
> > > interface to pass information from the hypervisor to the guest operating
> > > system running inside a virtual machine. The hypervisor bit indicates
> > > the presence of a hypervisor and that it is safe to test these
> > > additional software leaves".
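
FWIW, from inside a guest the check described above boils down to a few
lines of C.  A minimal sketch, assuming an x86 guest and GCC/Clang's
<cpuid.h> (KVM puts the "KVMKVMKVM" signature into EBX/ECX/EDX of leaf
0x40000000):

#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;
	char sig[13] = { 0 };

	/* CPUID leaf 1, ECX bit 31: the "hypervisor present" bit. */
	__cpuid(1, eax, ebx, ecx, edx);
	if (!(ecx & (1u << 31))) {
		puts("no hypervisor bit; leaf 0x40000000 is not safe to trust");
		return 0;
	}

	/* Only now are the software-reserved leaves known to be meaningful. */
	__cpuid(0x40000000, eax, ebx, ecx, edx);
	memcpy(sig + 0, &ebx, 4);
	memcpy(sig + 4, &ecx, 4);
	memcpy(sig + 8, &edx, 4);
	printf("hypervisor signature: \"%s\", eax=0x%x\n", sig, eax);
	return 0;
}
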
> > 
> > Looks like it's not a bug, then. Still, most hypervisors do have this
> > leaf, so it's a reasonable check that will catch most issues.  We can
> > always blacklist more as they are found. Additionally, let's go ahead
> > and add the ability for userspace to disable fast MMIO for those
> > hypervisors we fail to detect.
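
For the userspace knob, the simplest shape is probably a kvm_intel module
parameter checked before anything is registered on the fast MMIO bus.
Just to illustrate (the parameter name below is made up, not an existing
option):

#include <linux/module.h>

/* Hypothetical knob, name invented for illustration: the default keeps
 * today's behaviour, and loading kvm_intel with enable_fast_mmio=0 would
 * let a nested setup opt out of the instruction-length shortcut. */
static bool enable_fast_mmio = true;
module_param(enable_fast_mmio, bool, 0444);
MODULE_PARM_DESC(enable_fast_mmio, "Use the fast MMIO path");
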
> 
> In the worst case, I'd make fast MMIO an opt-in unsafe feature
> regardless of what we run on.  Users that just want KVM to work get the
> default, and people who care about utmost performance can jump through
> hoops.

That might have been reasonable originally, but you do not want to slow
everyone down and then make them jump through hoops just to get back to
where they were.

So I think it has to be opt-out, really, for nested setups
at this point.

> > > >> KVM uses standard features, and the SDM clearly says that the
> > > >> instruction length field is undefined.
> > > > 
> > > > True. Let's see whether Intel can commit to a stronger definition.
> > > > I don't think there's any rush to make this change.
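
To spell out what is being relied on here: on the exit taken for a
fast-MMIO address (an EPT misconfiguration), KVM skips the instruction
decoder and simply advances RIP by the VM-exit instruction length -- the
field the SDM leaves undefined for that exit reason.  Roughly (a sketch
of the idea, not the actual KVM code; helper names are the ones used in
KVM's VMX code):

static void fast_mmio_skip_insn(struct kvm_vcpu *vcpu)
{
	/* This is the field the SDM does not guarantee to be valid
	 * on EPT misconfiguration exits. */
	u32 insn_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN);

	kvm_rip_write(vcpu, kvm_rip_read(vcpu) + insn_len);
}
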
> > > 
> > > I disagree.  Relying on undefined processor features is a bad idea.
> > 
> > Maybe it was a bad idea 3 years ago, yes. In 2012 I posted "kvm_para:
> > add mmio word store hypercall" as an alternative.  It was NAKed because
> > MMIO was seen as safer and better. By now many people rely on MMIO being
> > fast.  Let's talk to the hardware guys about defining the feature before
> > we give up and spend years designing an alternative.
> 
> The change is not backward-compatible wrt. the SDM, but all processors
> might actually be behaving like we want ...  (I'd assert undefined
> behavior and add a VM-exit flag if I were to allow it, though.)
> 
> > > > It's just that this has been there for 3 years and people have built a
> > > > product around this.
> > > 
> > > Around 700 clock cycles?
> > > 
> > > Paolo
> > 
> > About 30% of the cost of an exit, isn't it?  There are definitely
> > workloads where the cost of exits gates performance. We didn't work on
> > fast MMIO based on theoretical assumptions. But maybe I am wrong. We'll
> > see. Jason here has volunteered to test your patch and we'll see what
> > comes out of it. If I'm wrong and it's about 1%, I won't split hairs.
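
(Taking both figures at face value: if ~700 cycles is ~30% of an exit,
a full exit comes to roughly 700 / 0.3 ≈ 2300 cycles, so the question is
whether shaving that much off each MMIO exit shows up in Jason's numbers.)
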
> 
> I'm ok with waiting for the numbers as I hope that we won't have to
> resort to adding special cases.
