Date:   Fri, 04 Jan 2019 18:09:17 +0800
From:   Wei Wang <wei.w.wang@...el.com>
To:     Jim Mattson <jmattson@...gle.com>
CC:     LKML <linux-kernel@...r.kernel.org>,
        kvm list <kvm@...r.kernel.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Andi Kleen <ak@...ux.intel.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Kan Liang <kan.liang@...el.com>,
        Ingo Molnar <mingo@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        like.xu@...el.com, Jann Horn <jannh@...gle.com>,
        arei.gonglei@...wei.com
Subject: Re: [PATCH v4 04/10] KVM/x86: intel_pmu_lbr_enable

On 01/03/2019 11:34 PM, Jim Mattson wrote:
> On Wed, Jan 2, 2019 at 11:16 PM Wei Wang <wei.w.wang@...el.com> wrote:
>> On 01/03/2019 07:26 AM, Jim Mattson wrote:
>>> On Wed, Dec 26, 2018 at 2:01 AM Wei Wang <wei.w.wang@...el.com> wrote:
>>>> The lbr stack is architecturally specific, for example, SKX has 32 lbr
>>>> stack entries while HSW has 16 entries, so a HSW guest running on a SKX
>>>> machine may not get accurate perf results. Currently, we forbid the
>>>> guest lbr enabling when the guest and host see different lbr stack
>>>> entries.
>>> How do you handle live migration?
>> This feature is gated by the QEMU "lbr=true" option.
>> So if the lbr fails to work on the destination machine,
>> the destination side QEMU wouldn't be able to boot,
>> and migration will not happen.
> Yes, but then what happens?
>
> Fast forward to, say, 2021. You're decommissioning all Broadwell
> servers in your data center. You have to migrate the running VMs off
> of those Broadwell systems onto newer hardware. But, with the current
> implementation, the migration cannot happen. So, what do you do? I
> suppose you just never enable the feature in the first place. Right?

I'm not sure that's how people would run their data centers. What would
be the point of decommissioning all the BDW machines while important BDW
VMs are still running on them?

The "lbr=true" option can also be disabled via QMP, which will disable the
kvm side lbr support. So if you really want to deal with the above case,
you could first disable the lbr feature on the source side, and then 
boot the
destination side QEMU without "lbr=true". The lbr feature will not be 
available
to use by the guest at the time you decide to migrate the guest to a
non-compatible physical machine.

The point of this patch is: if we can't offer our users accurate
lbr results, we had better disable the feature rather than offer
wrong results that confuse them.
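
To make that concrete, here is a minimal, hypothetical sketch of the kind
of check the patch is about (not the actual patch code; the LBR stack
sizes are assumed to have been looked up from the guest CPU model and
from the host already):

#include <stdbool.h>

/*
 * Illustrative sketch only, not the actual intel_pmu_lbr_enable() code:
 * guest_lbr_nr and host_lbr_nr are assumed to be the numbers of LBR
 * stack entries implied by the guest CPU model and provided by the host
 * (e.g. 16 on HSW, 32 on SKX).
 */
static bool lbr_stack_compatible(int guest_lbr_nr, int host_lbr_nr)
{
	/*
	 * Enable the guest lbr feature only when the guest sees exactly
	 * as many LBR entries as the host provides; otherwise the guest
	 * would read a stack of a different depth and get inaccurate
	 * perf results, so the feature stays disabled.
	 */
	return guest_lbr_nr > 0 && guest_lbr_nr == host_lbr_nr;
}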


Best,
Wei
