Message-ID: <CANRm+CyBKp_8CwNKzaMm0J0m476X52UsqmH7NvDsjQnFa8nqzg@mail.gmail.com>
Date: Thu, 22 Jun 2017 19:50:02 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: root <yang.zhang.wz@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
Paolo Bonzini <pbonzini@...hat.com>,
"the arch/x86 maintainers" <x86@...nel.org>,
Jonathan Corbet <corbet@....net>, tony.luck@...el.com,
Borislav Petkov <bp@...en8.de>,
Peter Zijlstra <peterz@...radead.org>, mchehab@...nel.org,
Andrew Morton <akpm@...ux-foundation.org>, krzk@...nel.org,
jpoimboe@...hat.com, Andy Lutomirski <luto@...nel.org>,
Christian Borntraeger <borntraeger@...ibm.com>,
thgarnie@...gle.com, rgerst@...il.com, minipli@...glemail.com,
douly.fnst@...fujitsu.com, nicstange@...il.com,
Frederic Weisbecker <fweisbec@...il.com>, dvlasenk@...hat.com,
Daniel Bristot de Oliveira <bristot@...hat.com>,
yamada.masahiro@...ionext.com, mika.westerberg@...ux.intel.com,
Chen Yu <yu.c.chen@...el.com>, aaron.lu@...el.com,
Steven Rostedt <rostedt@...dmis.org>, me@...ehuey.com,
Len Brown <len.brown@...el.com>,
Prarit Bhargava <prarit@...hat.com>,
hidehiro.kawai.ez@...achi.com, fengtiantian@...wei.com,
pmladek@...e.com, jeyu@...hat.com, Larry.Finger@...inger.net,
zijun_hu@....com, luisbg@....samsung.com, johannes.berg@...el.com,
niklas.soderlund+renesas@...natech.se, zlpnobody@...il.com,
adobriyan@...il.com, fgao@...ai8.com, ebiederm@...ssion.com,
subashab@...eaurora.org, arnd@...db.de,
Matt Fleming <matt@...eblueprint.co.uk>,
Mel Gorman <mgorman@...hsingularity.net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
linux-doc@...r.kernel.org, linux-edac@...r.kernel.org,
kvm <kvm@...r.kernel.org>
Subject: Re: [PATCH 0/2] x86/idle: add halt poll support
2017-06-22 19:22 GMT+08:00 root <yang.zhang.wz@...il.com>:
> From: Yang Zhang <yang.zhang.wz@...il.com>
>
> Some latency-intensive workloads see an obvious performance drop
> when running inside a VM. The main reason is that the overhead is
> amplified when running inside a VM; the largest cost I have seen
> is in the idle path.
> This patch series introduces a new mechanism to poll for a while
> before entering the idle state. If a reschedule is needed during
> the poll, we avoid going through the heavy overhead path.
>
> Here is the data I get when running the contextswitch benchmark
> (https://github.com/tsuna/contextswitch):
> before patch:
> 2000000 process context switches in 4822613801ns (2411.3ns/ctxsw)
> after patch:
> 2000000 process context switches in 3584098241ns (1792.0ns/ctxsw)
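
If I read the series correctly, the guest-side idea is roughly the
following. This is a sketch only, not the actual patch; the
poll_threshold_ns variable and the helper name are illustrative
assumptions, not necessarily what the patch uses:

/*
 * Sketch of the guest-side idea (illustrative names): spin for a
 * short, tunable window before halting, so a wakeup that arrives
 * during the window is handled without the halt/VM-exit path.
 */
static unsigned long poll_threshold_ns = 100000;	/* 100us, made up */

static void idle_poll_then_halt(void)
{
	u64 start = ktime_get_ns();

	while (ktime_get_ns() - start < poll_threshold_ns) {
		if (need_resched())
			return;		/* work arrived: skip the halt entirely */
		cpu_relax();
	}

	safe_halt();			/* nothing showed up: halt as usual */
}

Patch 2/2 ("use dynamic halt poll") presumably sizes this window at
run time rather than using a fixed threshold.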
That is roughly a 26% reduction in context-switch latency. Did you
test this after disabling adaptive halt-polling in kvm? What is the
performance data w/ this patchset and w/o adaptive halt-polling in
kvm, and w/o this patchset but w/ adaptive halt-polling in kvm? In
addition, since adaptive halt-polling is already done in kvm, both
Linux and Windows guests can benefit from it without guest changes.
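
For comparison, the host-side adaptive halt-polling can be disabled
by writing 0 to /sys/module/kvm/parameters/halt_poll_ns. Its
self-tuning works roughly like this (simplified from
kvm_vcpu_block() in virt/kvm/kvm_main.c):

/*
 * block_ns is how long the vCPU actually stayed blocked;
 * halt_poll_ns is the kvm module parameter bounding the
 * per-vCPU poll window vcpu->halt_poll_ns.
 */
if (block_ns <= vcpu->halt_poll_ns)
	;					/* window was big enough */
else if (vcpu->halt_poll_ns && block_ns > halt_poll_ns)
	shrink_halt_poll_ns(vcpu);	/* long halt: polling wasted CPU */
else if (vcpu->halt_poll_ns < halt_poll_ns && block_ns < halt_poll_ns)
	grow_halt_poll_ns(vcpu);	/* short halt: poll longer next time */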
Regards,
Wanpeng Li
> Yang Zhang (2):
> x86/idle: add halt poll for halt idle
> x86/idle: use dynamic halt poll
>
> Documentation/sysctl/kernel.txt | 24 ++++++++++
> arch/x86/include/asm/processor.h | 6 +++
> arch/x86/kernel/apic/apic.c | 6 +++
> arch/x86/kernel/apic/vector.c | 1 +
> arch/x86/kernel/cpu/mcheck/mce_amd.c | 2 +
> arch/x86/kernel/cpu/mcheck/therm_throt.c | 2 +
> arch/x86/kernel/cpu/mcheck/threshold.c | 2 +
> arch/x86/kernel/irq.c | 5 ++
> arch/x86/kernel/irq_work.c | 2 +
> arch/x86/kernel/process.c | 80 ++++++++++++++++++++++++++++++++
> arch/x86/kernel/smp.c | 6 +++
> include/linux/kernel.h | 5 ++
> kernel/sched/idle.c | 3 ++
> kernel/sysctl.c | 23 +++++++++
> 14 files changed, 167 insertions(+)
>
> --
> 1.8.3.1
>