Date:   Tue, 28 Jul 2020 09:25:24 -0700
From:   Sean Christopherson <sean.j.christopherson@...el.com>
To:     Xiaoyao Li <xiaoyao.li@...el.com>
Cc:     Vitaly Kuznetsov <vkuznets@...hat.com>,
        Chenyi Qiang <chenyi.qiang@...el.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>
Subject: Re: [RFC 2/2] KVM: VMX: Enable bus lock VM exit

On Mon, Jul 27, 2020 at 12:38:53PM +0800, Xiaoyao Li wrote:
> On 7/23/2020 9:21 AM, Sean Christopherson wrote:
> >On Wed, Jul 01, 2020 at 04:49:49PM +0200, Vitaly Kuznetsov wrote:
> >>Xiaoyao Li <xiaoyao.li@...el.com> writes:
> >>>So you want an exit to userspace for every bus lock and leave it all to
> >>>userspace. Yes, it's doable.
> >>
> >>In some cases we may not even want to have a VM exit: think of,
> >>e.g., the real-time/partitioning case, where even for a bus lock we
> >>may not want to add additional latency just to count such events.
> >
> >Hmm, I suspect this isn't all that useful for real-time cases because they'd
> >probably want to prevent the split lock in the first place, e.g. would prefer
> >to use the #AC variant in fatal mode.  Of course, the availability of split
> >lock #AC is a whole other can of worms.
> >
> >But anyways, I 100% agree that this needs either an off-by-default module
> >param or an opt-in per-VM capability.
> >
> 
> Maybe on-by-default, or an opt-out per-VM capability?
> Turning it on introduces no overhead if no bus lock happens in the guest,
> but it gives KVM the capability to track every potential bus lock. If the
> user doesn't want the extra latency due to bus lock VM exits, it's better
> to try to fix the bus lock itself, since that also incurs high latency.

Except that I doubt the physical system owner and VM owner are the same
entity in the vast majority of KVM use cases.  So yeah, in a perfect world
the guest application that's causing bus locks would be fixed, but in
practice there is likely no sane way for the KVM owner to inform the guest
application owner that their application is broken, let alone fix said
application.

The caveat would be that we may need to enable this by default if the host
kernel policy mandates it.
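
For the opt-in route, I'm picturing something as simple as the below.
Purely a sketch: the capability number and the field names are made up,
it's just to show the module param gating availability and the per-VM
capability actually turning the VM exit on.

  /* kvm_intel module param, off by default */
  static bool __read_mostly bus_lock_exit;
  module_param(bus_lock_exit, bool, 0444);

  /* per-VM opt-in, e.g. in the KVM_ENABLE_CAP path */
  case KVM_CAP_X86_BUS_LOCK_EXIT:                 /* placeholder cap */
          r = -EINVAL;
          if (!bus_lock_exit)                     /* feature not enabled */
                  break;
          kvm->arch.bus_lock_exit_enabled = true; /* placeholder field */
          r = 0;
          break;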

> >>I'd suggest we make the new capability tri-state:
> >>- disabled (no vmexit, default)
> >>- stats only (what this patch does)
> >>- userspace exit
> >>But maybe this is overkill; I'd like to hear what others think.
> >
> >Userspace exit would also be interesting for debug.  Another throttling
> >option would be schedule() or cond_resched(), though that's probably getting
> >into overkill territory.
> >
> 
> We're going to leverage the host's policy, i.e., call handle_user_bus_lock(),
> for throttling, as proposed in https://lkml.kernel.org/r/1595021700-68460-1-git-send-email-fenghua.yu@intel.com
> 
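
Just to sketch what I'm picturing for the handler side of that.
Illustrative only: the field/constant names below are made up, and the
handle_user_bus_lock() call assumes whatever signature that series ends
up defining.

  static int handle_bus_lock(struct kvm_vcpu *vcpu)
  {
          /* "disabled" never gets here, the VM exit simply isn't enabled. */
          ++vcpu->stat.bus_locks;                   /* placeholder stat */

          switch (vcpu->kvm->arch.bus_lock_mode) {  /* placeholder field */
          case KVM_BUS_LOCK_STATS:
                  /* Count it and resume the guest, no extra policy. */
                  return 1;
          case KVM_BUS_LOCK_RATELIMIT:
                  /*
                   * Hook into the host-wide policy from the series linked
                   * above (exact helper/signature TBD there).
                   */
                  handle_user_bus_lock(vcpu);
                  return 1;
          case KVM_BUS_LOCK_USER_EXIT:
                  /* Punt to userspace for debug/accounting. */
                  vcpu->run->exit_reason = KVM_EXIT_X86_BUS_LOCK;
                  return 0;
          }
          return 1;
  }

Whether the rate-limiting case should live in KVM at all, versus purely
in the host policy, is of course still up in the air.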
