Message-ID: <CAC4Lta3Bhh45HLjuMPQam_2WyWsTrLoo+bt5jx48KmBkrY0aAw@mail.gmail.com>
Date: Mon, 24 Mar 2014 19:42:30 +0530
From: Raghavendra KT <raghavendra.kt@...ux.vnet.ibm.com>
To: "Li, Bin (Bin)" <bin.bl.li@...atel-lucent.com>
Cc: KVM <kvm@...r.kernel.org>,
Neel Jatania <neel.jatania@...atel-lucent.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Avi Kivity <avi@...hat.com>,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mike Galbraith <efault@....de>,
Chris Wright <chrisw@...s-sol.org>, ttracy@...hat.com,
"Nakajima, Jun" <jun.nakajima@...el.com>,
Rik van Riel <riel@...hat.com>
Subject: Re: Enhancement for PLE handler in KVM
On Mon, Mar 3, 2014 at 11:54 PM, Li, Bin (Bin)
<bin.bl.li@...atel-lucent.com> wrote:
> Hello, all.
>
> The PLE handler attempts to determine an alternate vCPU to schedule. In
> some cases the wrong vCPU is scheduled and performance suffers.
>
> This patch allows the guest OS to signal, via a hypercall, that it is
> starting or ending a critical section. Using this information in the PLE
> handler allows a more intelligent vCPU scheduling decision to be made.
> The patch only changes the PLE behaviour if the new hypercall mechanism
> is used; otherwise, the existing PLE algorithm continues to be used to
> determine the next vCPU.
>
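For reference, a guest-side pair of markers built on such a hypercall could
look roughly like the sketch below. This is a minimal sketch, not the patch
itself: the hypercall names and numbers are placeholders, since the actual
definitions added to include/uapi/linux/kvm_para.h are not shown in this
excerpt.

#include <asm/kvm_para.h>	/* kvm_para_available(), kvm_hypercall0() */

/* Placeholder hypercall numbers; the real ones are whatever the patch
 * defines in include/uapi/linux/kvm_para.h. */
#define KVM_HC_ENTER_CRITICAL	100
#define KVM_HC_EXIT_CRITICAL	101

static inline void guest_enter_critical(void)
{
	/* Tell the host this vCPU is entering a critical section. */
	if (kvm_para_available())
		kvm_hypercall0(KVM_HC_ENTER_CRITICAL);
}

static inline void guest_exit_critical(void)
{
	/* Tell the host the critical section is over. */
	if (kvm_para_available())
		kvm_hypercall0(KVM_HC_EXIT_CRITICAL);
}

A guest kernel would call these around its critical sections (e.g. on kernel
entry/exit as described below), and the host PLE handler can then prefer
vCPUs that are flagged as holding a lock.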
> Benefits of the patch:
> - The guest OS real-time performance is significantly improved when the
> hypercall is used to mark entering and leaving guest OS kernel state.
> - The guest OS system clock jitter measured on an Intel E5 2620 is
> reduced from 400ms down to 6ms.
> - The guest OS system clock is driven by a 2ms clock interrupt. The
> jitter is measured as the difference between the rdtsc() value in the
> clock interrupt handler and the expected TSC value (see the sketch
> after this list).
> - Details of the test report are attached for reference.
>
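As an aside, the jitter measurement described above could be implemented
along these lines. This is a reconstruction, not the actual test code:
record_jitter() is a hypothetical logging helper, and the TSC read may be
native_read_tsc()/rdtscll() depending on kernel version.

/* Sketch of the jitter measurement (assumed, not the original test
 * code). With a periodic 2ms tick, the expected TSC delta per tick is
 * tsc_khz * 2; jitter is the deviation from that expectation. */
static u64 last_tsc;
static u64 expected_delta;	/* tsc_khz * 2 for a 2ms tick */

static void clock_tick_handler(void)
{
	u64 now = rdtsc();
	s64 jitter = (s64)(now - last_tsc - expected_delta);

	record_jitter(jitter);	/* hypothetical logging helper */
	last_tsc = now;
}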
> Patch details:
>
> From 77edfa193a4e29ab357ec3b1e097f8469d418507 Mon Sep 17 00:00:00 2001
> From: Bin BL LI <bin.bl.li@...atel-lucent.com>
> Date: Mon, 3 Mar 2014 11:23:35 -0500
> Subject: [PATCH] Initial commit
>
> ---
>  arch/x86/kvm/x86.c            |  7 +++++++
>  include/linux/kvm_host.h      | 16 ++++++++++++++++
>  include/uapi/linux/kvm_para.h |  2 ++
>  virt/kvm/kvm_main.c           | 14 +++++++++++++-
>  4 files changed, 38 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 39c28f0..e735de3 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5582,6 +5582,7 @@ void kvm_arch_exit(void)
>  int kvm_emulate_halt(struct kvm_vcpu *vcpu)
>  {
>  	++vcpu->stat.halt_exits;
> +	kvm_vcpu_set_holding_lock(vcpu, false);
>  	if (irqchip_in_kernel(vcpu->kvm)) {
>  		vcpu->arch.mp_state = KVM_MP_STATE_HALTED;
>  		return 1;
>
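The excerpt above only shows the arch/x86/kvm/x86.c hunk. Presumably the
virt/kvm/kvm_main.c change makes the directed-yield loop in
kvm_vcpu_on_spin() consult the flag; below is a sketch under that
assumption, where kvm_vcpu_is_holding_lock() is a guessed name for the
accessor pairing with kvm_vcpu_set_holding_lock().

/* Sketch only: prefer yield candidates that declared themselves inside
 * a critical section, i.e. the likely lock holders worth boosting. */
static void ple_boost_lock_holder(struct kvm *kvm, struct kvm_vcpu *me)
{
	struct kvm_vcpu *vcpu;
	int i;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (vcpu == me)
			continue;
		if (!kvm_vcpu_is_holding_lock(vcpu))	/* guessed accessor */
			continue;
		if (kvm_vcpu_yield_to(vcpu) > 0)	/* directed yield */
			break;
	}
}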
Joining late to comment on this :(.
Seeing that you are trying to set 'holding_lock' in the halt handling
path, I am just curious whether you could try
https://lkml.org/lkml/2013/7/22/41 to see if you get any benefit. [We
could not get any convincing benefit while posting the pv patches, and
dropped it.]
Regarding SPIN_THRESHOLD tuning, I did some experiments with dynamically
tuning the loop count based on the head/tail values (e.g. spin longer if
we are nearer to the lock holder in the queue), but that also did not
yield much result.
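For what it's worth, the idea was along these lines (a sketch of the
approach, not the actual experimental patch; names follow the x86
ticket-lock code):

/* Scale the spin count by the waiter's distance from the lock holder,
 * derived from the ticket lock's head/tail values: waiters close to
 * the head spin longer, waiters far back give up (halt) sooner. */
static __always_inline unsigned int
dynamic_spin_threshold(arch_spinlock_t *lock, __ticket_t my_ticket)
{
	__ticket_t head = ACCESS_ONCE(lock->tickets.head);
	unsigned int dist = (__ticket_t)(my_ticket - head) / TICKET_LOCK_INC;

	return dist ? SPIN_THRESHOLD / dist : SPIN_THRESHOLD;
}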
[...]