Message-ID: <87zi399xih.fsf@vitty.brq.redhat.com>
Date:   Thu, 15 Mar 2018 16:19:50 +0100
From:   Vitaly Kuznetsov <vkuznets@...hat.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     kvm@...r.kernel.org, x86@...nel.org,
        Radim Krčmář <rkrcmar@...hat.com>,
        "K. Y. Srinivasan" <kys@...rosoft.com>,
        Haiyang Zhang <haiyangz@...rosoft.com>,
        Stephen Hemminger <sthemmin@...rosoft.com>,
        "Michael Kelley \(EOSG\)" <Michael.H.Kelley@...rosoft.com>,
        Mohammed Gamal <mmorsy@...hat.com>,
        Cathy Avery <cavery@...hat.com>, Bandan Das <bsd@...hat.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 7/7] x86/kvm: use Enlightened VMCS when running on Hyper-V

Paolo Bonzini <pbonzini@...hat.com> writes:

> On 09/03/2018 15:02, Vitaly Kuznetsov wrote:
>> Enlightened VMCS is just a structure in memory, the main benefit
>> besides avoiding somewhat slower VMREAD/VMWRITE is using clean field
>> mask: we tell the underlying hypervisor which fields were modified
>> since VMEXIT so there's no need to inspect them all.
>> 
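(To sketch the clean-field idea concretely -- struct and field names below
are illustrative, not necessarily the exact eVMCS layout: writing a field
clears the bit covering its group, so the hypervisor only re-reads groups
whose bit got cleared since the last VMEXIT.)

	static inline void evmcs_write_field(struct hv_enlightened_vmcs *evmcs,
					     u64 *field, u64 value,
					     u32 clean_field_bit)
	{
		*field = value;
		/* Mark the group dirty; it gets reloaded on the next VM entry. */
		evmcs->hv_clean_fields &= ~clean_field_bit;
	}
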
>> Tight CPUID loop test shows significant speedup:
>> Before: 18890 cycles
>> After: 8304 cycles
>> 
>> Static key is being used to avoid performance penalty for non-Hyper-V
>> deployments. Tests show we add around 3 (three) CPU cycles on each
>> VMEXIT (1077.5 cycles before, 1080.7 cycles after for the same CPUID
>> loop on bare metal). We can probably avoid one test/jmp in vmx_vcpu_run()
>> but I don't see a clean way to use static key in assembly.
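
(For reference, the C side of such a static key looks roughly like this --
the key and helper names are assumed, not quoted from the series; the
hand-written assembly in vmx_vcpu_run() is exactly where this pattern does
not apply cleanly:)

	/* Compiles to a patched jump/no-op on non-Hyper-V hosts. */
	static DEFINE_STATIC_KEY_FALSE(enable_evmcs);

	static void vmcs_write_any(unsigned long field, unsigned long value)
	{
		if (static_branch_unlikely(&enable_evmcs))
			evmcs_write(field, value);	/* plain memory write */
		else
			vmcs_writel(field, value);	/* real VMWRITE */
	}
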
>
> If you want to live dangerously, you can use text_poke_early to change
> the vmwrite to mov.  It's just a single instruction, so it's probably
> not too hard.

It is not:

+#if IS_ENABLED(CONFIG_HYPERV) && defined(CONFIG_X86_64)
+
+/* Luckily, both original and new instructions are of the same length */
+#define EVMCS_RSP_OPCODE_LEN 3
+static void evmcs_patch_vmx_cpu_run(void)
+{
+       u8 *addr;
+       u8 opcode_old[] = {0x0f, 0x79, 0xd4}; /* vmwrite rsp, rdx */
+       u8 opcode_new[] = {0x48, 0x89, 0x26}; /* mov rsp, (rsi) */
+
+       /*
+        * What we're searching for MUST be present in vmx_vcpu_run();
+        * we replace the first occurrence only.
+        */
+       for (addr = (u8 *)vmx_vcpu_run; ; addr++) {
+               if (!memcmp(addr, opcode_old, EVMCS_RSP_OPCODE_LEN)) {
+                       /*
+                        * vmx_vcpu_run() is not running on any CPU at this
+                        * point, so text_poke() is safe; text_poke_early()
+                        * would require manually remapping the (read-only)
+                        * text area RW first.
+                        */
+                       text_poke(addr, opcode_new, EVMCS_RSP_OPCODE_LEN);
+                       break;
+               }
+       }
+}
+#endif
+

text_poke() also needs to be exported.
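
Something along these lines in arch/x86/kernel/alternative.c (where
text_poke() lives), assuming a GPL export there is acceptable, since
kvm-intel can be built as a module:

+EXPORT_SYMBOL_GPL(text_poke);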

This works. But hell, this is a crude hack :-) Not sure if there's a
cleaner way to find what needs to be patched without something like a
jump label table ...
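
One possible shape for that, with made-up section/symbol names and untested:
record the address of the vmwrite from the inline asm into a dedicated
table, the way jump labels and the exception table do, then walk that table
instead of scanning for the opcode pattern:

	/* In the vmx_vcpu_run() inline asm, tag the patch site: */
		"1: .byte 0x0f, 0x79, 0xd4 \n\t"	/* vmwrite rsp, rdx */
		".pushsection evmcs_patch_table, \"a\" \n\t"
		_ASM_PTR " 1b \n\t"
		".popsection \n\t"

	/* ld provides __start_/__stop_ symbols for C-identifier sections. */
	extern u8 *__start_evmcs_patch_table[], *__stop_evmcs_patch_table[];

	static void evmcs_patch_vmx_vcpu_run(void)
	{
		u8 **site;
		u8 opcode_new[] = {0x48, 0x89, 0x26};	/* mov rsp, (rsi) */

		for (site = __start_evmcs_patch_table;
		     site < __stop_evmcs_patch_table; site++)
			text_poke(*site, opcode_new, EVMCS_RSP_OPCODE_LEN);
	}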

-- 
  Vitaly
