Message-ID: <27cc0d6b-6bd7-fcaf-10b4-37bb566871f8@redhat.com>
Date:   Wed, 16 Oct 2019 09:07:39 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Andrea Arcangeli <aarcange@...hat.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Sean Christopherson <sean.j.christopherson@...el.com>
Subject: Re: [PATCH 12/14] KVM: retpolines: x86: eliminate retpoline from
 vmx.c exit handlers

On 16/10/19 01:42, Andrea Arcangeli wrote:
> On Wed, Oct 16, 2019 at 12:22:31AM +0200, Paolo Bonzini wrote:
>> Oh come on.  0.9 is not 12 years old.  virtio 1.0 is 3.5 years old
>> (March 2016).  Anything older than 2017 is going to use 0.9.
> 
> Sorry if I got the date wrong, but I still don't see the point in
> optimizing for legacy virtio. I can't justify forcing everyone to
> execute that additional branch for inb/outb just to speed up legacy
> virtio, which nobody should be using in combination with a
> bleeding-edge KVM host anyway.

Yet you would add CPUID to the list even though it does not even show
up in your benchmarks, and is *never* invoked in a hot path by *any*
sane program?  Some OSes have never gotten virtio 1.0 drivers; OpenBSD
only got it earlier this year.
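
For reference, the shape of the change we are discussing is roughly the
following (a self-contained userspace sketch, not the actual vmx.c code;
the exit-reason values and handler names are made up for illustration):

#include <stdio.h>

enum { EXIT_HLT, EXIT_CPUID, EXIT_IO, EXIT_MAX };

static int handle_hlt(void)   { return printf("hlt\n");   }
static int handle_cpuid(void) { return printf("cpuid\n"); }
static int handle_io(void)    { return printf("io\n");    }

/*
 * The generic path: an indirect call through a table of handlers,
 * i.e. a retpoline when the kernel is built with CONFIG_RETPOLINE=y.
 */
static int (*const exit_handlers[EXIT_MAX])(void) = {
	[EXIT_HLT]   = handle_hlt,
	[EXIT_CPUID] = handle_cpuid,
	[EXIT_IO]    = handle_io,
};

static int handle_exit(int reason)
{
	/* Hot exit reasons get direct, predictable branches... */
	if (reason == EXIT_HLT)
		return handle_hlt();
	if (reason == EXIT_CPUID)
		return handle_cpuid();
	/* ...everything else still pays for the indirect call. */
	return exit_handlers[reason]();
}

int main(void)
{
	handle_exit(EXIT_HLT);
	handle_exit(EXIT_IO);
	return 0;
}

The whole argument is about which exit reasons are frequent enough to
deserve a slot in that if chain.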

>> Your tables give:
>>
>> 	Samples	  Samples%  Time%     Min Time  Max time       Avg time
>> HLT     101128    75.33%    99.66%    0.43us    901000.66us    310.88us
>> HLT     118474    19.11%    95.88%    0.33us    707693.05us    43.56us
>>
>> If "avg time" means the average time to serve an HLT vmexit, I don't
>> understand how you can have an average time of 0.3ms (1/3000th of a
>> second) and 100000 samples per second.  Can you explain that to me?
> 
> I described it wrong: the bpftrace recording was over a sleep 5, not
> a sleep 1.  The pipe loop was definitely a sleep 1.

It still doesn't add up.  0.3ms / 5 is 1/15000th of a second; 43us is
1/25000th of a second.  Do you have multiple vCPUs, perhaps?
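
To spell out the check (a rough upper bound, assuming the window really
was 5 seconds and that one vCPU's HLT exits cannot overlap):

#include <stdio.h>

int main(void)
{
	/* Numbers taken from the table above; 5 second bpftrace window. */
	double window_s  = 5.0;
	double avg_us[2] = { 310.88, 43.56 };   /* avg HLT service time */
	long   seen[2]   = { 101128, 118474 };  /* samples recorded     */

	for (int i = 0; i < 2; i++) {
		double max_per_vcpu = window_s / (avg_us[i] / 1e6);
		printf("row %d: at most %.0f HLT exits per vCPU, %ld seen\n",
		       i + 1, max_per_vcpu, seen[i]);
	}
	return 0;
}

Both rows exceed what a single vCPU could accumulate in the window,
which is why I am asking about multiple vCPUs.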

> The issue is that in production you get a flood more of those with
> hundreds of CPUs, so the exact number doesn't move the needle.
> This just needs to be frequent enough that the branch cost pays for
> itself, but the sure thing is that the HLT vmexit will not go away
> unless you execute mwait in guest mode by isolating the CPU in the
> host.

The number of vmexits doesn't count (for HLT).  What counts is how long
they take to be serviced, and as long as it's 1us or more the
optimization is pointless.

Consider these pictures

         w/o optimization                   with optimization
         ----------------------             -------------------------
0us      vmexit                             vmexit
500ns    retpoline                          call vmexit handler directly
600ns    retpoline                          kvm_vcpu_check_block()
700ns    retpoline                          kvm_vcpu_check_block()
800ns    kvm_vcpu_check_block()             kvm_vcpu_check_block()
900ns    kvm_vcpu_check_block()             kvm_vcpu_check_block()
...
39900ns  kvm_vcpu_check_block()             kvm_vcpu_check_block()

                            <interrupt arrives>

40000ns  kvm_vcpu_check_block()             kvm_vcpu_check_block()


Unless the interrupt arrives exactly in the few nanoseconds that it
takes to execute the retpoline, a direct handling of HLT vmexits makes
*absolutely no difference*.
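
In numbers, with the figures from the picture above (roughly 300ns spent
in the retpoline against an interrupt that arrives after 40us), the
saving is well under one percent of the time spent in the exit; a
trivial check:

#include <stdio.h>

int main(void)
{
	/* Illustrative figures matching the picture above. */
	double retpoline_ns = 300.0;    /* extra cost of the indirect dispatch */
	double service_ns   = 40000.0;  /* time until the interrupt arrives    */

	printf("saving: %.2f%% of the exit\n",
	       100.0 * retpoline_ns / service_ns);
	return 0;
}

And the longer the guest stays halted, the smaller that fraction gets.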

>> Again: what is the real workload that does thousands of CPUIDs per second?
> 
> None, but there are always background CPUID vmexits while there are
> never inb/outb vmexits.
> 
> So the cpuid retpoline removal has a slight chance of paying for the
> cost of the branch, while the inb/outb retpoline removal cannot.

Please stop considering only the exact configuration of your benchmarks.
There are known, valid configurations where outb is a very hot vmexit.

Thanks,

Paolo
