Message-ID: <7h7e7wkm0c.fsf@crazypad.dinechin.lan>
Date: Fri, 02 Aug 2019 10:30:11 +0200
From: Christophe de Dinechin <christophe.de.dinechin@...il.com>
To: Dario Faggioli <dfaggioli@...e.com>
Cc: Wanpeng Li <kernellwp@...il.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH] KVM: Disable wake-affine vCPU process to mitigate lock holder preemption

Dario Faggioli writes:

> On Tue, 2019-07-30 at 17:33 +0800, Wanpeng Li wrote:
>> However, in a multiple-VM over-subscribed virtualization scenario,
>> it increases the probability of vCPU stacking, which means that
>> sibling vCPUs from the same VM will be stacked on one pCPU. I tested
>> three 80-vCPU VMs running on one 80-pCPU Skylake server (PLE is
>> supported); the ebizzy score increases 17% after disabling
>> wake-affine for vCPU processes.
>>
> Can't we achieve this by removing SD_WAKE_AFFINE from the relevant
> scheduling domains? By acting on
> /proc/sys/kernel/sched_domain/cpuX/domainY/flags, I mean?
>
> Of course this will impact all tasks, not only KVM vcpus. But if the
> host does KVM only anyway...

Even a host dedicated to KVM has many non-KVM processes. I suspect an
increasing number of hosts will be split between VMs and containers.
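
For reference, acting on that knob would amount to something like the
sketch below. This assumes CONFIG_SCHED_DEBUG is set (otherwise the
per-domain flags files are not present/writable) and that
SD_WAKE_AFFINE is 0x20, as in 5.2-era include/linux/sched/topology.h;
the flags files print and accept decimal integers. Run as root:

  # Clear SD_WAKE_AFFINE (0x20) in every scheduling domain on every CPU.
  for f in /proc/sys/kernel/sched_domain/cpu*/domain*/flags; do
      cur=$(cat "$f")                 # current flags, decimal
      echo $(( cur & ~0x20 )) > "$f"  # write back with the bit cleared
  done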
>
> Regards
--
Cheers,
Christophe de Dinechin