Date:   Thu, 13 Jul 2017 19:49:57 +0800
From:   Yang Zhang <yang.zhang.wz@...il.com>
To:     Radim Krčmář <rkrcmar@...hat.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Wanpeng Li <kernellwp@...il.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>,
        the arch/x86 maintainers <x86@...nel.org>,
        Jonathan Corbet <corbet@....net>, tony.luck@...el.com,
        Borislav Petkov <bp@...en8.de>,
        Peter Zijlstra <peterz@...radead.org>, mchehab@...nel.org,
        Andrew Morton <akpm@...ux-foundation.org>, krzk@...nel.org,
        jpoimboe@...hat.com, Andy Lutomirski <luto@...nel.org>,
        Christian Borntraeger <borntraeger@...ibm.com>,
        Thomas Garnier <thgarnie@...gle.com>,
        Robert Gerst <rgerst@...il.com>,
        Mathias Krause <minipli@...glemail.com>,
        douly.fnst@...fujitsu.com, Nicolai Stange <nicstange@...il.com>,
        Frederic Weisbecker <fweisbec@...il.com>, dvlasenk@...hat.com,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        yamada.masahiro@...ionext.com, mika.westerberg@...ux.intel.com,
        Chen Yu <yu.c.chen@...el.com>, aaron.lu@...el.com,
        Steven Rostedt <rostedt@...dmis.org>,
        Kyle Huey <me@...ehuey.com>, Len Brown <len.brown@...el.com>,
        Prarit Bhargava <prarit@...hat.com>,
        hidehiro.kawai.ez@...achi.com, fengtiantian@...wei.com,
        pmladek@...e.com, jeyu@...hat.com, Larry.Finger@...inger.net,
        zijun_hu@....com, luisbg@....samsung.com, johannes.berg@...el.com,
        niklas.soderlund+renesas@...natech.se, zlpnobody@...il.com,
        Alexey Dobriyan <adobriyan@...il.com>, fgao@...ai8.com,
        ebiederm@...ssion.com,
        Subash Abhinov Kasiviswanathan <subashab@...eaurora.org>,
        Arnd Bergmann <arnd@...db.de>,
        Matt Fleming <matt@...eblueprint.co.uk>,
        Mel Gorman <mgorman@...hsingularity.net>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        linux-doc@...r.kernel.org, linux-edac@...r.kernel.org,
        kvm <kvm@...r.kernel.org>
Subject: Re: [PATCH 2/2] x86/idle: use dynamic halt poll

On 2017/7/4 22:13, Radim Krčmář wrote:
> 2017-07-03 17:28+0800, Yang Zhang:
>> The background is that we (Alibaba Cloud) get more and more complaints
>> from our customers about both KVM and Xen compared to bare metal. After
>> investigation, the root cause is known to us: the big cost of message-passing
>> workloads (David showed it at KVM Forum 2015).
>>
>> A typical message workload like below:
>> vcpu 0                             vcpu 1
>> 1. send ipi                     2.  doing hlt
>> 3. go into idle                 4.  receive ipi and wake up from hlt
>> 5. write APIC timer twice       6.  write APIC timer twice to
>>    to stop sched timer              reprogram sched timer
>
> One write is enough to disable/re-enable the APIC timer -- why does
> Linux use two?

One write removes (disarms) the timer and the other reprograms it.
Normally there is only the one write to remove the timer, but in some
cases it gets reprogrammed as well.

>
>> 7. doing hlt                    8.  handle task and send ipi to
>>                                     vcpu 0
>> 9. same to 4.                   10. same to 3
>>
>> One transaction will introduce about 12 vmexits (2 hlt and 10 MSR writes). The
>> cost of such vmexits degrades performance severely.
>
> Yeah, sounds like too much ... I understood that there are
>
>   IPI from 1 to 2
>   4 * APIC timer
>   IPI from 2 to 1
>
> which adds to 6 MSR writes -- what are the other 4?

In the worst case, each timer event touches the APIC timer twice, so it 
adds 4 additional MSR writes. But this is not always true.

>
>>                                                          The Linux kernel
>> already provides idle=poll to mitigate the trend. But it only eliminates the
>> IPI and hlt vmexits; it does nothing about starting/stopping the sched timer. A
>> compromise would be to turn off the NOHZ kernel, but that is not the default
>> config for new distributions. Same for halt-poll in KVM: it only solves the
>> cost of scheduling in/out on the host and cannot help such workloads much.
>>
>> The purpose of this patch is to improve the current idle=poll mechanism to
>
> Please aim to allow MWAIT instead of idle=poll -- MWAIT doesn't slow
> down the sibling hyperthread.  MWAIT solves the IPI problem, but doesn't
> get rid of the timer one.

Yes, I can try it. But MWAIT will not yield the CPU; it only helps the 
sibling hyperthread, as you mentioned.

>
>> use dynamic polling and to poll before touching the sched timer. It should not
>> be a virtualization-specific feature, but bare metal seems to have a low cost
>> for accessing the MSR, so I want to enable it only in VMs. Though the idea
>> behind the patch may not fit all conditions perfectly, it looks no worse than
>> what we have now.
>
> It adds code to hot-paths (interrupt handlers) while trying to optimize
> an idle-path, which is suspicious.
>
>> How about we keep the current implementation and I integrate the patch into
>> the paravirtualization part as Paolo suggested? We can continue to discuss it,
>> and I will keep refining it if anyone has better suggestions.
>
> I think there is a nicer solution to avoid the expensive timer rewrite:
> Linux uses one-shot APIC timers and getting the timer interrupt is about
> as expensive as programming the timer, so the guest can keep the timer
> armed, but not re-arm it after the expiration if the CPU is idle.
>
> This should also mitigate the problem with short idle periods, but the
> optimized window is anywhere between 0 and 1 ms.
>
> Do you see disadvantages of this combined with MWAIT?
>
> Thanks.
>


-- 
Yang
Alibaba Cloud Computing
