Message-ID: <CANRm+Cxuhr0_iW3_5vrQw01GU=zaTQSBv5J9CJotRsDep6DsXA@mail.gmail.com>
Date:   Wed, 15 Nov 2017 20:20:03 +0800
From:   Wanpeng Li <kernellwp@...il.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        kvm <kvm@...r.kernel.org>,
        "Radim Kr??m????" <rkrcmar@...hat.com>,
        Wanpeng Li <wanpeng.li@...mail.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH v2 2/4] KVM: Add paravirt remote TLB flush

2017-11-15 17:54 GMT+08:00 Peter Zijlstra <peterz@...radead.org>:
> On Wed, Nov 15, 2017 at 04:43:32PM +0800, Wanpeng Li wrote:
>> Hi Peterz,
>>
>> I found a big performance difference, as I discussed with you several days ago.
>>
>> ebizzy -M
>>              vanilla   static/local cpumask   per-cpu cpumask
>>  8 vCPUs       10152                  10083             10117
>> 16 vCPUs        1224                   4866             10008
>> 24 vCPUs        1109                   3871              9928
>> 32 vCPUs        1025                   3375              9811
>>
>> In addition, I can observe that ~50% of perf top time is occupied by
>> smp_call_function_many() and ~30% by call_function_interrupt() in the
>> guest when running ebizzy with the static/local cpumask variable.
>> However, I can hardly observe this IPI activity after changing to the
>> per-cpu variable. Any opinions?
>
> That doesn't really make sense.. :/
>
> So a single static variable is broken (multiple CPUs can call
> flush_tlb_others() concurrently and overwrite each other's masks). But I
> don't see why a per-cpu variable would be much slower than an on-stack
> variable.
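
(For context, the breakage with a single shared mask looks roughly like the
sketch below; this is illustrative only, not code from the patch, and the
function/mask names are made up.)

static cpumask_t pv_flushmask;	/* one mask shared by every caller: racy */

static void pv_flush_tlb_others(const struct cpumask *cpumask)
{
	/* CPU A copies its flush targets into the shared mask ... */
	cpumask_copy(&pv_flushmask, cpumask);
	/* ... CPU B can run the same copy here and clobber A's targets ... */
	cpumask_clear_cpu(smp_processor_id(), &pv_flushmask);
	/* whatever flush/IPI follows now operates on whichever mask won the
	 * race; a per-cpu (or on-stack) mask gives each caller its own
	 * scratch space instead. */
}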

For the ebizzy score, bigger is better, so the per-cpu variable looks 2~3
times better than on-stack. Actually, I found out what happens here. :)

+ for_each_possible_cpu(cpu) {
+     zalloc_cpumask_var_node(per_cpu_ptr(&__pv_tlb_mask, cpu),
+         GFP_KERNEL, cpu_to_node(cpu));
+ }

This zalloc_cpumask_var_node() fails to allocate the per-cpu cpumask
memory, leaving the mask NULL. There is a check in my kvm_flush_tlb_others():

+ if (unlikely(!flushmask))
+     return;

So kvm_flush_tlb_others() skips all the TLB shootdowns. I think that's the
reason why the overcommit score is as high as the non-overcommit one; it
also explains why I can't observe the IPI-related functions in perf top.
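
(Roughly, the failing path looks like the reconstruction below; it is a
simplified sketch around the two hunks quoted above, not the exact v2 patch.
kvm_alloc_pv_tlb_masks() is a hypothetical init hook and the pr_warn() is
just a suggestion for making the allocation failure visible.)

static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);

static void kvm_flush_tlb_others(const struct cpumask *cpumask,
				 const struct flush_tlb_info *info)
{
	struct cpumask *flushmask = this_cpu_cpumask_var_ptr(__pv_tlb_mask);

	/* The per-cpu mask was never allocated, so every remote flush is
	 * silently dropped: no IPIs in perf top, inflated ebizzy scores. */
	if (unlikely(!flushmask))
		return;

	cpumask_copy(flushmask, cpumask);
	/* ... try the PV flush, fall back to IPIs for the remaining vCPUs ... */
	native_flush_tlb_others(flushmask, info);
}

static void __init kvm_alloc_pv_tlb_masks(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		if (!zalloc_cpumask_var_node(per_cpu_ptr(&__pv_tlb_mask, cpu),
					     GFP_KERNEL, cpu_to_node(cpu)))
			pr_warn("KVM: no memory for pv tlb flush mask (cpu %d)\n",
				cpu);
	}
}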

Regards,
Wanpeng Li
