Message-ID: <fe6eeed1-4eee-eaaa-df3b-8979af8a3891@suse.com>
Date: Tue, 14 Nov 2017 08:30:40 +0100
From: Juergen Gross <jgross@...e.com>
To: Quan Xu <quan.xu0@...il.com>, Quan Xu <quan.xu03@...il.com>,
kvm@...r.kernel.org, linux-doc@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org, x86@...nel.org,
xen-devel@...ts.xenproject.org
Cc: Yang Zhang <yang.zhang.wz@...il.com>,
Alok Kataria <akataria@...are.com>,
Rusty Russell <rusty@...tcorp.com.au>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
On 14/11/17 08:02, Quan Xu wrote:
>
>
> On 2017/11/13 18:53, Juergen Gross wrote:
>> On 13/11/17 11:06, Quan Xu wrote:
>>> From: Quan Xu <quan.xu0@...il.com>
>>>
>>> So far, pv_idle_ops.poll is the only op in pv_idle_ops. .poll is
>>> called in the idle path and polls for a while before we enter the
>>> real idle state.
>>>
>>> In virtualization, the idle path includes several heavy operations,
>>> such as timer access (LAPIC timer or TSC deadline timer), which hurt
>>> performance, especially for latency-sensitive workloads like
>>> message-passing tasks. The cost comes mainly from the VM-exit, a
>>> hardware context switch between the virtual machine and the
>>> hypervisor. Our solution is to poll for a while and skip the real
>>> idle path if we get a schedule event during polling.
>>>
>>> Polling may waste CPU cycles, so we adopt a smart polling mechanism
>>> to reduce useless polling.
>>>
>>> Signed-off-by: Yang Zhang <yang.zhang.wz@...il.com>
>>> Signed-off-by: Quan Xu <quan.xu0@...il.com>
>>> Cc: Juergen Gross <jgross@...e.com>
>>> Cc: Alok Kataria <akataria@...are.com>
>>> Cc: Rusty Russell <rusty@...tcorp.com.au>
>>> Cc: Thomas Gleixner <tglx@...utronix.de>
>>> Cc: Ingo Molnar <mingo@...hat.com>
>>> Cc: "H. Peter Anvin" <hpa@...or.com>
>>> Cc: x86@...nel.org
>>> Cc: virtualization@...ts.linux-foundation.org
>>> Cc: linux-kernel@...r.kernel.org
>>> Cc: xen-devel@...ts.xenproject.org
>> Hmm, is the idle entry path really so critical to performance that a new
>> pvops function is necessary?
> Juergen, here is the data we got when running the netperf benchmark:
> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
> 29031.6 bit/s -- 76.1 %CPU
>
> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
> 35787.7 bit/s -- 129.4 %CPU
>
> 3. w/ kvm dynamic poll:
> 35735.6 bit/s -- 200.0 %CPU
>
> 4. w/patch and w/ kvm dynamic poll:
> 42225.3 bit/s -- 198.7 %CPU
>
> 5. idle=poll
> 37081.7 bit/s -- 998.1 %CPU
>
> w/ this patch, we improve performance by 23% (35787.7 vs. 29031.6
> bit/s). We could even improve performance by 45.4% if we use the
> patch together with kvm dynamic poll. Also, the CPU cost is much
> lower than in the 'idle=poll' case.
I don't question the general idea. I just think pvops isn't the best way
to implement it.
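
For reference, the pvops hook from the patch boils down to roughly the
following (a sketch reconstructed from the patch description, not taken
verbatim from the patch):

struct pv_idle_ops {
	void (*poll)(void);
};

extern struct pv_idle_ops pv_idle_ops;

static inline void paravirt_idle_poll(void)
{
	/* Poll for a wakeup event before entering the real idle path. */
	pv_idle_ops.poll();
}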
>> Wouldn't a function pointer, maybe guarded
>> by a static key, be enough? A further advantage would be that this would
>> work on other architectures, too.
>
> I assume this feature will be ported to other archs. A new pvops
> makes the code clean and easy to maintain. I also tried to add it to
> an existing pvops struct, but it doesn't fit.
You are aware that pvops is x86 only?
I really don't see the big difference in maintainability compared to the
static key / function pointer variant:
void (*guest_idle_poll_func)(void);
struct static_key guest_idle_poll_key __read_mostly;

static inline void guest_idle_poll(void)
{
	/* Do nothing unless a hypervisor has installed a poll function. */
	if (static_key_false(&guest_idle_poll_key))
		guest_idle_poll_func();
}
And KVM would just need to set guest_idle_poll_func and enable the
static key. Works on non-x86 architectures, too.
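
On the KVM guest side that would be just a few lines, e.g. (a rough
sketch; kvm_idle_poll() is a made-up name standing in for the patch's
polling routine):

static void kvm_idle_poll(void)
{
	/* Poll for a wakeup event for a while before real idle. */
}

static void __init kvm_setup_idle_poll(void)
{
	guest_idle_poll_func = kvm_idle_poll;
	static_key_slow_inc(&guest_idle_poll_key);	/* enable the hook */
}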
Juergen