Message-ID: <6d9e85ac5768e920805f121eeaff1360f3b257df.camel@suse.com>
Date: Thu, 01 Aug 2019 14:39:34 +0200
From: Dario Faggioli <dfaggioli@...e.com>
To: Paolo Bonzini <pbonzini@...hat.com>,
Wanpeng Li <kernellwp@...il.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Cc: Radim Krčmář <rkrcmar@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH] KVM: Disable wake-affine vCPU process to mitigate lock
holder preemption
On Tue, 2019-07-30 at 13:46 +0200, Paolo Bonzini wrote:
> On 30/07/19 11:33, Wanpeng Li wrote:
> > When qemu/another vCPU injects a virtual interrupt into the guest by
> > waking up one sleeping vCPU, it increases the probability of stacking
> > vCPUs/qemu via scheduler wake-affine. The vCPU stacking issue can
> > greatly increase lock synchronization latency in a virtualized
> > environment. This patch disables wake-affine for vCPU processes to
> > mitigate lock holder preemption.
>
> There is no guarantee that the vCPU remains on the thread where it's
> created, so the patch is not enough.
>
> If many vCPUs are stacked on the same pCPU, why doesn't the wake_cap
> kick in sooner or later?
>
Assuming it actually is the case that vCPUs *do* get stacked *and* that
wake_cap() *doesn't* kick in, maybe it could be because of this check?
  /* Minimum capacity is close to max, no need to abort wake_affine */
  if (max_cap - min_cap < max_cap >> 3)
          return 0;
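
The way I read it (just my interpretation, sketched outside the kernel
tree): on a host where all the pCPUs have the same capacity -- which is
the typical KVM server case -- max_cap == min_cap, so the difference is
0, which is always smaller than max_cap >> 3, and the function bails
out with 0 before ever looking at the task's utilization. A toy,
userspace-only sketch of just that check, with made-up capacity
numbers (not the real wake_cap(): no rq/root_domain/task_util here):

  #include <stdio.h>

  /*
   * Re-implementation of only the quoted capacity check, to show when
   * it short-circuits to "no need to abort wake_affine".
   */
  static int capacity_check_keeps_wake_affine(long min_cap, long max_cap)
  {
          /* Minimum capacity is close to max, no need to abort wake_affine */
          if (max_cap - min_cap < max_cap >> 3)
                  return 1;       /* wake_cap() would return 0 here */
          return 0;
  }

  int main(void)
  {
          /* symmetric host, all pCPUs at capacity 1024: check always trips */
          printf("symmetric:  %d\n",
                 capacity_check_keeps_wake_affine(1024, 1024));

          /* big.LITTLE-style host (446 vs 1024): check does not trip */
          printf("asymmetric: %d\n",
                 capacity_check_keeps_wake_affine(446, 1024));

          return 0;
  }

So, if that reading is right, on symmetric-capacity hosts wake_cap()
can never be what breaks up the stacking.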
Regards
--
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)