Message-ID: <53F61EAC.2000004@redhat.com>
Date: Thu, 21 Aug 2014 18:30:36 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Radim Krčmář <rkrcmar@...hat.com>,
kvm@...r.kernel.org
CC: linux-kernel@...r.kernel.org, Gleb Natapov <gleb@...nel.org>,
Raghavendra KT <raghavendra.kt@...ux.vnet.ibm.com>,
Vinod Chegu <chegu_vinod@...com>,
Hui-Zhi Zhao <hui-zhi.zhao@...com>,
Christian Borntraeger <borntraeger@...ibm.com>,
Lisa Mitchell <lisa.mitchell@...com>
Subject: Re: [PATCH v3 0/7] Dynamic Pause Loop Exiting window.
On 21/08/2014 18:08, Radim Krčmář wrote:
> v2 -> v3:
> * copy&paste frenzy [v3 4/7] (split modify_ple_window)
> * commented update_ple_window_actual_max [v3 4/7]
> * renamed shrinker to modifier [v3 4/7]
> * removed an extraneous max(new, ple_window) [v3 4/7] (should have been in v2)
> * changed tracepoint argument type, printing and macro abstractions [v3 5/7]
> * renamed ple_t to ple_int [v3 6/7] (visible in modinfo)
> * intelligent updates of ple_window [v3 7/7]
>
> ---
> v1 -> v2:
> * squashed [v1 4/9] and [v1 5/9] (clamping)
> * dropped [v1 7/9] (CPP abstractions)
> * merged core of [v1 9/9] into [v1 4/9] (automatic maximum)
> * reworked kernel_param_ops: closer to pure int [v2 6/6]
> * introduced ple_window_actual_max & reworked clamping [v2 4/6]
> * added seqlock for parameter modifications [v2 6/6]
>
> ---
> PLE does not scale in its current form. When increasing VCPU count
> above 150, one can hit soft lockups because of runqueue lock contention.
> (Which says a lot about performance.)
>
> The main reason is that kvm_ple_loop cycles through all VCPUs.
> Replacing it with a scalable solution would be ideal, but it has already
> been well optimized for various workloads, so this series tries to
> alleviate a different major problem while minimizing the chance of
> regressions: we have too many useless PLE exits.
>
> Just increasing PLE window would help some cases, but it still spirals
> out of control. By increasing the window after every PLE exit, we can
> limit the amount of useless ones, so we don't reach the state where CPUs
> spend 99% of the time waiting for a lock.
>
> HP confirmed that this series prevents soft lockups and TSC sync errors
> on large guests.
Hi,
I'm not sure of the usefulness of patch 6, so I'm going to drop it.
I'll keep it in my local junkyard branch in case it's going to be useful
in some scenario I didn't think of.
Patch 7 can be easily rebased, so no need to repost (and I might even
squash it into patch 3, what do you think?).
Paolo