Date:   Mon, 17 Jul 2017 17:26:13 +0800
From:   Yang Zhang <yang.zhang.wz@...il.com>
To:     Alexander Graf <agraf@...e.de>,
        Radim Krčmář <rkrcmar@...hat.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Wanpeng Li <kernellwp@...il.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>,
        the arch/x86 maintainers <x86@...nel.org>,
        Jonathan Corbet <corbet@....net>, tony.luck@...el.com,
        Borislav Petkov <bp@...en8.de>,
        Peter Zijlstra <peterz@...radead.org>, mchehab@...nel.org,
        Andrew Morton <akpm@...ux-foundation.org>, krzk@...nel.org,
        jpoimboe@...hat.com, Andy Lutomirski <luto@...nel.org>,
        Christian Borntraeger <borntraeger@...ibm.com>,
        Thomas Garnier <thgarnie@...gle.com>,
        Robert Gerst <rgerst@...il.com>,
        Mathias Krause <minipli@...glemail.com>,
        douly.fnst@...fujitsu.com, Nicolai Stange <nicstange@...il.com>,
        Frederic Weisbecker <fweisbec@...il.com>, dvlasenk@...hat.com,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        yamada.masahiro@...ionext.com, mika.westerberg@...ux.intel.com,
        Chen Yu <yu.c.chen@...el.com>, aaron.lu@...el.com,
        Steven Rostedt <rostedt@...dmis.org>,
        Kyle Huey <me@...ehuey.com>, Len Brown <len.brown@...el.com>,
        Prarit Bhargava <prarit@...hat.com>,
        hidehiro.kawai.ez@...achi.com, fengtiantian@...wei.com,
        pmladek@...e.com, jeyu@...hat.com, Larry.Finger@...inger.net,
        zijun_hu@....com, luisbg@....samsung.com, johannes.berg@...el.com,
        niklas.soderlund+renesas@...natech.se, zlpnobody@...il.com,
        Alexey Dobriyan <adobriyan@...il.com>, fgao@...ai8.com,
        ebiederm@...ssion.com,
        Subash Abhinov Kasiviswanathan <subashab@...eaurora.org>,
        Arnd Bergmann <arnd@...db.de>,
        Matt Fleming <matt@...eblueprint.co.uk>,
        Mel Gorman <mgorman@...hsingularity.net>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        linux-doc@...r.kernel.org, linux-edac@...r.kernel.org,
        kvm <kvm@...r.kernel.org>
Subject: Re: [PATCH 2/2] x86/idle: use dynamic halt poll

On 2017/7/14 17:37, Alexander Graf wrote:
>
>
> On 13.07.17 13:49, Yang Zhang wrote:
>> On 2017/7/4 22:13, Radim Krčmář wrote:
>>> 2017-07-03 17:28+0800, Yang Zhang:
>>>> The background is that we (Alibaba Cloud) get more and more
>>>> complaints from our customers, in both KVM and Xen, compared to
>>>> bare-metal. After investigation, the root cause is known to us:
>>>> the big cost of message-passing workloads (David showed it at
>>>> KVM Forum 2015).
>>>>
>>>> A typical message-passing workload looks like this:
>>>> vcpu 0                             vcpu 1
>>>> 1. send ipi                     2.  doing hlt
>>>> 3. go into idle                 4.  receive ipi and wake up from hlt
>>>> 5. write APIC timer twice       6.  write APIC timer twice to
>>>>    to stop sched timer              reprogram sched timer
>>>
>>> One write is enough to disable/re-enable the APIC timer -- why does
>>> Linux use two?
>>
>> One write is to remove the timer and the other is to reprogram it.
>> Normally only one write is needed, to remove the timer, but in some
>> cases it will reprogram it as well.
>>
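(For illustration: with the APIC timer in TSC-deadline mode, the two
writes on each side are roughly the pair below, and each wrmsr is an
MSR-write vmexit for a guest. This is a simplified sketch, not the
exact kernel code path; "next_deadline" is a placeholder name.)

    /* cancel the pending sched-timer deadline */
    wrmsrl(MSR_IA32_TSC_DEADLINE, 0);

    /* ... and later re-arm it for the next event */
    wrmsrl(MSR_IA32_TSC_DEADLINE, next_deadline);
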
>>>
>>>> 7. doing hlt                    8.  handle task and send ipi to
>>>>                                     vcpu 0
>>>> 9. same as 4                    10. same as 3
>>>>
>>>> One transaction will introduce about 12 vmexits (2 hlt and 10 msr
>>>> writes). The cost of such vmexits degrades performance severely.
>>>
>>> Yeah, sounds like too much ... I understood that there are
>>>
>>>   IPI from 1 to 2
>>>   4 * APIC timer
>>>   IPI from 2 to 1
>>>
>>> which adds up to 6 MSR writes -- what are the other 4?
>>
>> In the worst case, each timer operation touches the APIC timer
>> twice, so it adds 4 additional msr writes. But this is not always
>> the case.
>>
>>>
>>>> The Linux kernel already provides idle=poll to mitigate this, but
>>>> it only eliminates the IPI and hlt vmexits; it does nothing about
>>>> starting/stopping the sched timer. A compromise would be to turn
>>>> off NOHZ, but that is not the default config for new
>>>> distributions. The same goes for halt-poll in KVM: it only
>>>> addresses the cost of scheduling in/out on the host and cannot
>>>> help such workloads much.
>>>>
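(For reference, the existing knobs mentioned above, as a usage
illustration; the values are examples, not recommendations:)

    # guest kernel command line: spin in the idle loop instead of hlt
    idle=poll

    # build-time alternative: periodic tick instead of NOHZ idle
    CONFIG_HZ_PERIODIC=y        # rather than CONFIG_NO_HZ_IDLE=y

    # host side: KVM's halt-polling window, in nanoseconds
    modprobe kvm halt_poll_ns=200000
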
>>>> The purpose of this patch is to improve the current idle=poll
>>>> mechanism to
>>>
>>> Please aim to allow MWAIT instead of idle=poll -- MWAIT doesn't slow
>>> down the sibling hyperthread.  MWAIT solves the IPI problem, but doesn't
>>> get rid of the timer one.
>>
>> Yes, I can try it. But MWAIT will not yield the CPU; it only helps
>> the sibling hyperthread, as you mentioned.
>
> If you implement proper MWAIT emulation that conditionally gets en- or
> disabled depending on the same halt poll dynamics that we already have
> for in-host HLT handling, it will also yield the CPU.

It is hard to do. If we do not intercept the MWAIT instruction, there
is no chance to wake up the CPU unless an interrupt arrives or
something stores to the address armed by MONITOR, which is the same
situation as idle=poll.
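
(To make the point concrete, here is the shape of a MONITOR/MWAIT idle
loop; a simplified sketch, not the exact mwait_idle() code:)

    /* arm the monitor on the flags word, then sleep; without MWAIT
       interception the CPU wakes only on an interrupt or on a store
       to the monitored cache line */
    while (!need_resched()) {
            __monitor(&current_thread_info()->flags, 0, 0);
            if (!need_resched())
                    __mwait(0, 0);
    }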

>
> As for the timer - are you sure the problem is really the overhead of
> the timer configuration, not the latency that it takes to actually fire
> the guest timer?

No, the main cost is introduced by vmexits, including the IPIs, the
timer programming, and HLT. David detailed it at KVM Forum; you can
search for "Message Passing Workloads in KVM" on Google, and the first
link gives the whole analysis of the problem.
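
(For completeness, this is roughly the idea of the patch on the guest
side; a hand-wavy sketch, and "poll_threshold_ns" is a made-up knob
name here:)

    /* spin for a short window before giving up and executing hlt,
       so that short waits avoid the hlt vmexit entirely */
    u64 start = ktime_get_ns();

    while (ktime_get_ns() - start < poll_threshold_ns) {
            if (need_resched())
                    return;         /* woken while polling: no vmexit */
            cpu_relax();
    }
    safe_halt();                    /* fall back to hlt: one vmexit */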

>
> One major problem I see is that we configure the host hrtimer to fire at
> the point in time when the guest wants to see a timer event. But in a
> virtual environment, the point in time when we have to start switching
> to the VM really should be a bit *before* the guest wants to be woken
> up, as it takes quite some time to switch back into the VM context.
>
>
> Alex


-- 
Yang
Alibaba Cloud Computing
