Message-ID: <53481976.3020209@zytor.com>
Date:	Fri, 11 Apr 2014 09:33:58 -0700
From:	"H. Peter Anvin" <hpa@...or.com>
To:	"Romer, Benjamin M" <Benjamin.Romer@...sys.com>
CC:	Fengguang Wu <fengguang.wu@...el.com>,
	Jet Chen <jet.chen@...el.com>,
	Paolo Bonzini <pbonzini@...hat.com>,
	Borislav Petkov <bp@...en8.de>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [visorchipset] invalid opcode: 0000 [#1] PREEMPT SMP

On 04/11/2014 06:51 AM, Romer, Benjamin M wrote:
>
>> I'm still confused where KVM comes into the picture.  Are you actually
>> using KVM (and thus talking about nested virtualization) or are you
>> using Qemu in JIT mode and running another hypervisor underneath?
> 
> The test that Fengguang used to find the problem was running the Linux
> kernel directly using KVM. When the kernel was run with "-cpu Haswell,
> +smep,+smap" set, the vmcall failed with an invalid op, but when it was
> run with "-cpu qemu64", the vmcall caused a vmexit, as it should.

As far as I know, Fengguang's test doesn't use KVM at all, it runs Qemu
as a JIT.  Completely different thing.  In that case Qemu probably
should *not* set the hypervisor bit.  However, the only thing that the
hypervisor bit means is that you can look for specific hypervisor APIs
in CPUID level 0x40000000+.
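
To illustrate what that probe looks like, here is a minimal user-space
sketch (assuming GCC's <cpuid.h>; this is not code from any of the
drivers involved):

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;
	char sig[13];

	/* CPUID leaf 1, ECX bit 31: "running under some hypervisor". */
	__cpuid(1, eax, ebx, ecx, edx);
	if (!(ecx & (1u << 31))) {
		puts("hypervisor bit clear");
		return 0;
	}

	/* Leaf 0x40000000 returns the vendor signature in EBX/ECX/EDX. */
	__cpuid(0x40000000, eax, ebx, ecx, edx);
	memcpy(sig, &ebx, 4);
	memcpy(sig + 4, &ecx, 4);
	memcpy(sig + 8, &edx, 4);
	sig[12] = '\0';
	printf("signature \"%s\", max hypervisor leaf 0x%x\n", sig, eax);

	/* Only a matching signature (e.g. "KVMKVMKVM") tells you which
	   hypercall ABI, if any, is actually available. */
	return 0;
}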

> My point is, the vmcall was made because the hypervisor bit was set. If
> this bit had been turned off, as it would be on a real processor, the
> vmcall wouldn't have happened.

And my point is that that is a bug.  In the driver.  A very serious one.
You cannot call VMCALL until you know *which* hypervisor API(s) you
have available, period.

>> The hypervisor bit is a complete red herring. If the guest CPU is
>> running in VT-x mode, then VMCALL should VMEXIT inside the guest
>> (invoking the guest root VT-x), 
> 
> The CPU is running in VT-x. That was my point: the kernel is running in
> the KVM guest, and KVM is setting the CPU feature bits such that bit 31
> is enabled.

Which it is because it wants to export the KVM hypercall interface.
However, keying VMCALL *only* on the HYPERVISOR bit is wrong in the extreme.
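
Something along these lines is the minimum a driver needs before ever
issuing VMCALL (a sketch only -- the helper name and signature string
are placeholders, not the actual visorchipset code):

#include <linux/string.h>
#include <asm/cpufeature.h>
#include <asm/processor.h>

static bool spar_hypervisor_detected(void)
{
	unsigned int eax, ebx, ecx, edx;
	char sig[13];

	/* The HYPERVISOR bit (what cpu_has_hypervisor checks) only says
	   "somebody is underneath", nothing more. */
	if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;

	/* Find out *which* hypervisor before touching its hypercall ABI. */
	cpuid(0x40000000, &eax, &ebx, &ecx, &edx);
	memcpy(sig, &ebx, 4);
	memcpy(sig + 4, &ecx, 4);
	memcpy(sig + 8, &edx, 4);
	sig[12] = '\0';

	return !strcmp(sig, "PlaceholderID");	/* real s-Par signature here */
}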

> I don't think it's a red herring because the kernel uses this bit
> elsewhere - it is reported as X86_FEATURE_HYPERVISOR in the CPU
> features, and can be checked with the cpu_has_hypervisor macro (which
> was not used by the original author of the code in the driver, but
> should have been). VMware and KVM support in the kernel also check for
> this bit before checking their hypervisor leaves for an ID. If it's not
> properly set, it affects more than just the s-Par drivers.
> 
>> but the fact still remains that you
>> should never, ever, invoke VMCALL unless you know what hypervisor you
>> have underneath.
> 
> From the standpoint of the s-Par drivers, yes, I agree (as I already
> said). However, VMCALL is not a privileged instruction, so anyone could
> use it from user space and go right past the OS straight to the
> hypervisor. IMHO, making it *lethal* to the guest is a bad idea, since
> any user could hard-stop the guest with a couple of lines of C.

Typically the hypervisor wants to generate a #UD inside the guest for
that case.  The guest OS will intercept it and SIGILL the user-space
process.
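
For the curious, a couple of lines of C really is all it takes to try
it from user space (a sketch; whether you get SIGILL, SIGSEGV, or just
an error value back depends entirely on the hypervisor underneath):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static void on_sigill(int sig)
{
	puts("SIGILL delivered -- the guest kernel survived");
	exit(0);
}

int main(void)
{
	signal(SIGILL, on_sigill);

	/* Inside a VT-x guest, VMCALL from ring 3 exits to the hypervisor;
	   a sane one refuses it (injects #UD or returns an error) instead
	   of hard-stopping the guest. */
	asm volatile("vmcall");

	puts("vmcall returned without a fault");
	return 0;
}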

	-hpa

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
