Message-ID: <20171115095425.2hsgpfomdmdru7ke@hirez.programming.kicks-ass.net>
Date: Wed, 15 Nov 2017 10:54:25 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Wanpeng Li <kernellwp@...il.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	kvm <kvm@...r.kernel.org>, Radim Krčmář <rkrcmar@...hat.com>,
Wanpeng Li <wanpeng.li@...mail.com>,
Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH v2 2/4] KVM: Add paravirt remote TLB flush
On Wed, Nov 15, 2017 at 04:43:32PM +0800, Wanpeng Li wrote:
> Hi Peterz,
>
> I found big performance difference as I discuss with you several days ago.
>
> ebizzy -M
>              vanilla    static/local cpumask    per-cpu cpumask
>  8 vCPUs      10152            10083                 10117
> 16 vCPUs       1224             4866                 10008
> 24 vCPUs       1109             3871                  9928
> 32 vCPUs       1025             3375                  9811
>
> In addition, perf top in the guest shows ~50% of the time in
> smp_call_function_many() and ~30% in call_function_interrupt() when
> running ebizzy with the static/local cpumask variable. However, these
> IPIs almost disappear after changing to the per-cpu variable. Any
> opinions?
That doesn't really make sense.. :/
So a single static variable is broken (multiple CPUs can call
flush_tlb_others() concurrently and overwrite each other's masks). But I
don't see why a per-cpu variable would be much slower than an on-stack
variable.
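
To make the distinction concrete, a minimal sketch of the per-cpu
variant (not the actual patch: kvm_flush_tlb_others() and pv_flush_mask
are illustrative names, and the real code may use cpumask_var_t with
per-CPU allocation instead of a full struct cpumask):

#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <asm/tlbflush.h>

/*
 * One mask per CPU.  A single static struct cpumask here would be
 * overwritten by concurrent flush_tlb_others() callers on other CPUs;
 * with a per-cpu copy, each CPU only ever touches its own mask (callers
 * run with preemption disabled).
 */
static DEFINE_PER_CPU(struct cpumask, pv_flush_mask);

static void kvm_flush_tlb_others(const struct cpumask *cpumask,
				 const struct flush_tlb_info *info)
{
	struct cpumask *mask = this_cpu_ptr(&pv_flush_mask);

	cpumask_copy(mask, cpumask);
	/* ... drop vCPUs known to be preempted, then IPI the rest ... */
	native_flush_tlb_others(mask, info);
}

The per-cpu copy costs one cpumask_copy() per flush, which should be no
more expensive than filling an on-stack mask, so the gap in the numbers
above is still surprising.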