Date:   Mon, 17 Jul 2017 20:50:03 +0800
From:   Yang Zhang <yang.zhang.wz@...il.com>
To:     Alexander Graf <agraf@...e.de>,
        Radim Krčmář <rkrcmar@...hat.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Wanpeng Li <kernellwp@...il.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>,
        the arch/x86 maintainers <x86@...nel.org>,
        Jonathan Corbet <corbet@....net>, tony.luck@...el.com,
        Borislav Petkov <bp@...en8.de>,
        Peter Zijlstra <peterz@...radead.org>, mchehab@...nel.org,
        Andrew Morton <akpm@...ux-foundation.org>, krzk@...nel.org,
        jpoimboe@...hat.com, Andy Lutomirski <luto@...nel.org>,
        Christian Borntraeger <borntraeger@...ibm.com>,
        Thomas Garnier <thgarnie@...gle.com>,
        Robert Gerst <rgerst@...il.com>,
        Mathias Krause <minipli@...glemail.com>,
        douly.fnst@...fujitsu.com, Nicolai Stange <nicstange@...il.com>,
        Frederic Weisbecker <fweisbec@...il.com>, dvlasenk@...hat.com,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        yamada.masahiro@...ionext.com, mika.westerberg@...ux.intel.com,
        Chen Yu <yu.c.chen@...el.com>, aaron.lu@...el.com,
        Steven Rostedt <rostedt@...dmis.org>,
        Kyle Huey <me@...ehuey.com>, Len Brown <len.brown@...el.com>,
        Prarit Bhargava <prarit@...hat.com>,
        hidehiro.kawai.ez@...achi.com, fengtiantian@...wei.com,
        pmladek@...e.com, jeyu@...hat.com, Larry.Finger@...inger.net,
        zijun_hu@....com, luisbg@....samsung.com, johannes.berg@...el.com,
        niklas.soderlund+renesas@...natech.se, zlpnobody@...il.com,
        Alexey Dobriyan <adobriyan@...il.com>, fgao@...ai8.com,
        ebiederm@...ssion.com,
        Subash Abhinov Kasiviswanathan <subashab@...eaurora.org>,
        Arnd Bergmann <arnd@...db.de>,
        Matt Fleming <matt@...eblueprint.co.uk>,
        Mel Gorman <mgorman@...hsingularity.net>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        linux-doc@...r.kernel.org, linux-edac@...r.kernel.org,
        kvm <kvm@...r.kernel.org>
Subject: Re: [PATCH 2/2] x86/idle: use dynamic halt poll

On 2017/7/17 17:54, Alexander Graf wrote:
>
>
> On 17.07.17 11:26, Yang Zhang wrote:
>> On 2017/7/14 17:37, Alexander Graf wrote:
>>>
>>>
>>> On 13.07.17 13:49, Yang Zhang wrote:
>>>> On 2017/7/4 22:13, Radim Krčmář wrote:
>>>>> 2017-07-03 17:28+0800, Yang Zhang:
>>>>>> The background is that we (Alibaba Cloud) get more and more
>>>>>> complaints from our customers in both KVM and Xen compared to
>>>>>> bare-metal. After investigation, the root cause is known to us:
>>>>>> the big cost of message-passing workloads (David showed it at
>>>>>> KVM Forum 2015).
>>>>>>
>>>>>> A typical message-passing workload looks like below:
>>>>>> vcpu 0                             vcpu 1
>>>>>> 1. send ipi                     2.  doing hlt
>>>>>> 3. go into idle                 4.  receive ipi and wake up from hlt
>>>>>> 5. write APIC timer twice       6.  write APIC timer twice to
>>>>>>    to stop sched timer              reprogram sched timer
>>>>>
>>>>> One write is enough to disable/re-enable the APIC timer -- why does
>>>>> Linux use two?
>>>>
>>>> One write is to remove the timer and the other is to reprogram it.
>>>> Normally only one write is needed, to remove the timer, but in some
>>>> cases the timer will also be reprogrammed.
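
For reference, the two writes on the stop side roughly correspond to
masking LVTT and zeroing TMICT in lapic_timer_shutdown(), paraphrased
here from memory of arch/x86/kernel/apic/apic.c (details vary by kernel
version). In x2APIC mode each apic_write() is an MSR write, hence a
vmexit when the APIC timer is not virtualized:

static int lapic_timer_shutdown(struct clock_event_device *evt)
{
	unsigned int v;

	/* First MSR write: mask the timer interrupt in the LVTT register. */
	v = apic_read(APIC_LVTT);
	v |= (APIC_LVT_MASKED | LOCAL_TIMER_VECTOR);
	apic_write(APIC_LVTT, v);

	/* Second MSR write: zero the initial count to stop the countdown. */
	apic_write(APIC_TMICT, 0);

	return 0;
}

Reprogramming is then another write to APIC_TMICT (or to the TSC
deadline MSR in deadline mode).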
>>>>
>>>>>
>>>>>> 7. doing hlt                    8.  handle task and send ipi to
>>>>>>                                     vcpu 0
>>>>>> 9. same as 4.                   10. same as 3.
>>>>>>
>>>>>> One transaction will introduce about 12 vmexits (2 hlt and 10 msr
>>>>>> writes). The cost of such vmexits degrades performance severely.
>>>>>
>>>>> Yeah, sounds like too much ... I understood that there are
>>>>>
>>>>>   IPI from 1 to 2
>>>>>   4 * APIC timer
>>>>>   IPI from 2 to 1
>>>>>
>>>>> which adds to 6 MSR writes -- what are the other 4?
>>>>
>>>> In the worst case, each timer reprogram will touch the APIC timer
>>>> twice, so it adds an additional 4 msr writes. But this is not always
>>>> the case.
>>>>
>>>>>
>>>>>> The Linux kernel already provides idle=poll to mitigate the trend,
>>>>>> but it only eliminates the IPI and hlt vmexits; it does nothing
>>>>>> about starting/stopping the sched timer. A compromise would be to
>>>>>> turn off the NOHZ kernel, but that is not the default config for
>>>>>> new distributions. The same goes for halt-poll in KVM: it only
>>>>>> solves the cost of scheduling in/out on the host and cannot help
>>>>>> such workloads much.
>>>>>>
>>>>>> The purpose of this patch is to improve the current idle=poll
>>>>>> mechanism to
>>>>>
>>>>> Please aim to allow MWAIT instead of idle=poll -- MWAIT doesn't slow
>>>>> down the sibling hyperthread.  MWAIT solves the IPI problem, but
>>>>> doesn't
>>>>> get rid of the timer one.
>>>>
>>>> Yes, I can try it. But MWAIT will not yield the CPU; it only helps
>>>> the sibling hyperthread, as you mentioned.
>>>
>>> If you implement proper MWAIT emulation that conditionally gets en- or
>>> disabled depending on the same halt poll dynamics that we already have
>>> for in-host HLT handling, it will also yield the CPU.
>>
>> It is hard to do. If we do not intercept the MWAIT instruction, there
>> is no chance to wake up the CPU unless an interrupt arrives or a store
>> hits the address armed by MONITOR, which is the same as idle=poll.
>
> Yes, but you can reconfigure the VMCS/VMCB to trap on MWAIT or not
> trap on it. That's something that idle=poll does not give you at all -
> a guest vcpu will always use 100% CPU.

There are two things we need to figure out:
1. How and when do we reconfigure the VMCS? Currently, all the knowledge
is in the guest; we don't know when to reconfigure it. Also, we cannot
prevent the guest from using MWAIT elsewhere once it sees the feature.
(A sketch of what the toggle itself would look like follows below.)

2. If the guest executes MWAIT without trapping, there is no way to set
a timeout for it, so that would waste CPU too.
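
The toggle itself is straightforward; a minimal sketch, using KVM/VMX
symbol names from around v4.12 (this illustrates the idea and is not
code from this patch):

static void vmx_set_mwait_intercept(struct kvm_vcpu *vcpu, bool intercept)
{
	/* Assumes this vCPU's VMCS is currently loaded. */
	u32 exec_ctl = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);

	if (intercept)
		exec_ctl |= CPU_BASED_MWAIT_EXITING;
	else
		exec_ctl &= ~CPU_BASED_MWAIT_EXITING;

	vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, exec_ctl);
}

With the bit clear, the guest's MWAIT runs natively and the vCPU really
sleeps on the host core; with it set, MWAIT traps as it does today. The
open question above is what heuristic should drive the toggle.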


>
> The only really tricky part is how to limit the effect of MONITOR on
> nested page table maintenance. But if we just set the MONITOR cache
> size to 4k, well-behaved guests should ideally always give us the same
> single page for wakeup - which we can then leave marked as trapping.
>
>>
>>>
>>> As for the timer - are you sure the problem is really the overhead of
>>> the timer configuration, not the latency that it takes to actually fire
>>> the guest timer?
>>
>> No, the main cost is introduced by vmexits, including the IPIs, the
>> timer programming, and HLT. David detailed it at KVM Forum; search for
>> "Message Passing Workloads in KVM" on Google and the first link gives
>> the whole analysis of the problem.
>
> During time critical message passing you want to keep both vCPUs inside
> the guest, yes. That again is something that guest exposed MWAIT would
> buy you.

I think MWAIT only helps in the sibling hyper-threading case. But in a 
real cloud, hyper-threading is not always turned on, e.g. in most Azure 
products and some Alibaba Cloud products. So it shouldn't be a big problem.

>
> The problem is that overcommitting CPU is very expensive with anything
> that does not set the guests idle at all. And not everyone can afford to
> throw more CPUs at problems :).

Agreed, that's the reason why we chose dynamic halt polling. On the 
other side, the cloud vendor has the knowledge to control whether to 
turn it on or not. The only problem is that there is currently no way 
to do that.
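
To illustrate what such a control could look like: below is a guest-side
analogue of KVM's host-side halt_poll_ns grow/shrink heuristic (from
virt/kvm/kvm_main.c), exposed as module parameters so a vendor can tune
or disable it. All names and constants here are illustrative
assumptions, not this patch:

/* Current poll window (simplified to a single global here). */
static unsigned int poll_ns;                    /* 0 = polling off */
static unsigned int poll_grow = 2;              /* multiplier on a hit */
static unsigned int poll_shrink = 2;            /* divisor on a miss */
static unsigned int max_poll_ns = 200000;       /* cap: 200 us */
module_param(poll_grow, uint, 0644);
module_param(poll_shrink, uint, 0644);
module_param(max_poll_ns, uint, 0644);

static void adjust_poll_window(bool woke_during_poll)
{
	if (woke_during_poll) {
		/* Polling avoided a hlt vmexit: grow the window. */
		if (!poll_ns)
			poll_ns = 10000;        /* initial window: 10 us */
		else
			poll_ns *= poll_grow;
		poll_ns = min(poll_ns, max_poll_ns);
	} else {
		/* The poll was wasted CPU: shrink the window. */
		poll_ns = poll_shrink ? poll_ns / poll_shrink : 0;
	}
}

A vendor who wants polling off entirely just sets max_poll_ns=0.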

>
>
> Alex


-- 
Yang
Alibaba Cloud Computing
