Message-ID: <CANRm+CwYC=rpEbe_OD+H6tDAFy4xYP6+JKRN2YHeH0TWt5234Q@mail.gmail.com>
Date: Fri, 2 Aug 2019 08:51:39 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: Dario Faggioli <dfaggioli@...e.com>
Cc: LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH] KVM: Disable wake-affine vCPU process to mitigate lock
holder preemption
On Thu, 1 Aug 2019 at 20:57, Dario Faggioli <dfaggioli@...e.com> wrote:
>
> On Tue, 2019-07-30 at 17:33 +0800, Wanpeng Li wrote:
> > However, in an over-subscribed scenario with multiple VMs, it
> > increases the probability of vCPU stacking, i.e. of sibling vCPUs
> > from the same VM being stacked on one pCPU. Testing three 80-vCPU
> > VMs running on one 80-pCPU Skylake server (with PLE support), the
> > ebizzy score increases by 17% after disabling wake-affine for the
> > vCPU processes.
> >
> Can't we achieve this by removing SD_WAKE_AFFINE from the relevant
> scheduling domains? By acting on
> /proc/sys/kernel/sched_domain/cpuX/domainY/flags, I mean?
>
> Of course this will impact all tasks, not only KVM vcpus. But if the
> host does KVM only anyway...
Yes, but hosts are not always dedicated to KVM; unless we introduce a
per-process flag, we can't cater to both mixed-workload hosts and
dedicated KVM hosts. A rough illustration of the per-domain approach
you describe is below.
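
For reference, clearing the flag by hand could look something like
this. This is only a minimal sketch under a few assumptions: it
assumes CONFIG_SCHED_DEBUG=y (otherwise the per-domain flags files
don't exist), that the flags file is a writable decimal value as in
kernels of this era, and that SD_WAKE_AFFINE is bit 0x20 as in the
current tree; it must run as root, and the bit value should be
double-checked against the running kernel's headers.

/* Clear SD_WAKE_AFFINE from every scheduling domain via procfs. */
#include <glob.h>
#include <stdio.h>

#define SD_WAKE_AFFINE 0x20  /* value as of v5.2; verify per kernel */

int main(void)
{
	glob_t g;
	size_t i;

	/* One flags file per CPU per domain level. */
	if (glob("/proc/sys/kernel/sched_domain/cpu*/domain*/flags",
		 0, NULL, &g) != 0) {
		perror("glob");
		return 1;
	}

	for (i = 0; i < g.gl_pathc; i++) {
		FILE *f = fopen(g.gl_pathv[i], "r+");
		unsigned long flags;

		if (!f)
			continue;
		if (fscanf(f, "%lu", &flags) == 1) {
			/* Rewrite the value with the bit cleared. */
			rewind(f);
			fprintf(f, "%lu", flags & ~SD_WAKE_AFFINE);
		}
		fclose(f);
	}
	globfree(&g);
	return 0;
}

But as you say, this acts on whole domains, so it disables wake-affine
for every task on the host, not just vCPU threads.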
Regards,
Wanpeng Li