Message-ID: <531F364C.5020406@redhat.com>
Date: Tue, 11 Mar 2014 17:14:04 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: "Li, Bin (Bin)" <bin.bl.li@...atel-lucent.com>,
Marcelo Tosatti <mtosatti@...hat.com>
CC: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Jatania, Neel (Neel)" <Neel.Jatania@...atel-lucent.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: Enhancement for PLE handler in KVM
On 11/03/2014 15:12, Li, Bin (Bin) wrote:
> - For guest OSes that don't use the hypercall interface, there
> will be no impact. The proposed PLE handler enhancement is
> structured to use the hint only if the guest OS uses the newly
> proposed hypercall, and it is per VM only.
> A VM running a general guest OS (Linux / Windows) will still
> use today's PLE handler to boost vCPUs, while a VM that uses the
> new hypercall to indicate lock acquisition and release will have
> its PLE handler boost the lock holder only.
No, if there is a jitter problem we want to fix it for all guest OSes,
not just for those that use a big kernel lock.
> - The main advantage of this proposal is that it reliably solves
> the problem. Is there any other option that could prevent the
> problem from happening entirely?
You haven't proved this yet. My impression is that, on a
non-overcommitted system, your proposal is exactly the same as a fair
lock with paravirtualization (except more expensive for the lock taker,
even when there is no contention).
I think I understand why, on an overcommitted system, you could still
have jitter with pv ticketlocks and not with your infrastructure. The
reason is that pv ticketlocks do not attempt to donate the quantum to
the lock holder. Is there anything we can do to fix *this*? I would
accept a new hypercall KVM_HC_HALT_AND_YIELD_TO_CPU that takes an APIC
id, donates the quantum to that CPU, and puts the originating CPU in
halted state.
If this is not enough, it's up to you to disprove this and explain why
the two have different jitter characteristics. To do this, you need to
implement paravirtualized fair locks in your kernel (and possibly
halt-and-yield), measure the difference in jitter, *trace what's
happening on the host to characterize the benefits of your solution*, etc.
> - Using a hypercall to mark lock status does increase CPU
> consumption, but the impact on the system depends very much on the
> lock usage characteristics of the guest OS.
>
> For a guest OS that takes kernel locks less frequently but holds
> each lock for a longer operation, the overall impact of the
> hypercall may *not* be an issue.
Again, if there is a jitter problem we want to fix it for all locks,
like we did for pv ticketlocks.
Paolo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/