Message-ID: <CALMp9eR0rkF-DuOWyB16q5r8y9DrefsMd1h4GfDkA8nmpWjMEg@mail.gmail.com>
Date: Thu, 3 Jan 2019 07:34:44 -0800
From: Jim Mattson <jmattson@...gle.com>
To: Wei Wang <wei.w.wang@...el.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
kvm list <kvm@...r.kernel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Andi Kleen <ak@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Kan Liang <kan.liang@...el.com>,
Ingo Molnar <mingo@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
like.xu@...el.com, Jann Horn <jannh@...gle.com>,
arei.gonglei@...wei.com
Subject: Re: [PATCH v4 04/10] KVM/x86: intel_pmu_lbr_enable

On Wed, Jan 2, 2019 at 11:16 PM Wei Wang <wei.w.wang@...el.com> wrote:
>
> On 01/03/2019 07:26 AM, Jim Mattson wrote:
> > On Wed, Dec 26, 2018 at 2:01 AM Wei Wang <wei.w.wang@...el.com> wrote:
> >> The LBR stack is model specific; for example, SKX has 32 LBR stack
> >> entries while HSW has 16, so an HSW guest running on an SKX machine
> >> may not get accurate perf results. Currently, we forbid enabling the
> >> guest LBR when the guest and host see different numbers of LBR stack
> >> entries.
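
(For concreteness, the guard described above boils down to comparing the
LBR stack depth implied by the guest CPU model with the host's. The
sketch below is purely illustrative: standalone C with made-up names,
using only the entry counts quoted above, not the patch's actual code.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative per-model LBR stack depths, per the commit
     * message: HSW has 16 entries, SKX has 32. */
    enum cpu_model { MODEL_HSW, MODEL_SKX };

    static int lbr_stack_entries(enum cpu_model m)
    {
            switch (m) {
            case MODEL_HSW: return 16;
            case MODEL_SKX: return 32;
            }
            return 0;
    }

    /* The proposed guard: guest LBR is enabled only when guest and
     * host agree on the number of LBR stack entries. */
    static bool guest_lbr_allowed(enum cpu_model guest, enum cpu_model host)
    {
            int n = lbr_stack_entries(guest);

            return n > 0 && n == lbr_stack_entries(host);
    }

    int main(void)
    {
            /* An HSW guest on an SKX host: 16 != 32, so forbidden. */
            printf("HSW guest on SKX host: %s\n",
                   guest_lbr_allowed(MODEL_HSW, MODEL_SKX) ?
                   "allowed" : "forbidden");
            return 0;
    }

It is exactly this equality test that creates the migration constraint
discussed below.)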
> > How do you handle live migration?
>
> This feature is gated by the QEMU "lbr=true" option.
> So if LBR fails to work on the destination machine,
> the destination-side QEMU wouldn't be able to boot,
> and migration will not happen.

Yes, but then what happens?

Fast forward to, say, 2021. You're decommissioning all Broadwell
servers in your data center. You have to migrate the running VMs off
of those Broadwell systems onto newer hardware. But, with the current
implementation, the migration cannot happen. So, what do you do? I
suppose you just never enable the feature in the first place. Right?
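
(As an aside on the mechanics: the "lbr=true" gating Wei mentions is
supplied when the VM is created. Assuming it is exposed as a -cpu
property, which the thread doesn't actually spell out, the invocation
would look roughly like

    qemu-system-x86_64 -cpu Haswell,lbr=true ...

and a destination QEMU that can't satisfy the request refuses to start,
which is what blocks the migration in the first place.)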