Message-ID: <20201216170740.d73xomotx4c3oxql@amd.com>
Date:   Wed, 16 Dec 2020 11:07:40 -0600
From:   Michael Roth <michael.roth@....com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     Andy Lutomirski <luto@...capital.net>,
        Sean Christopherson <seanjc@...gle.com>,
        "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        "x86@...nel.org" <x86@...nel.org>,
        "H . Peter Anvin" <hpa@...or.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "Lendacky, Thomas" <Thomas.Lendacky@....com>,
        Andy Lutomirski <luto@...nel.org>
Subject: Re: [PATCH v2] KVM: SVM: use vmsave/vmload for saving/restoring
 additional host state

On Wed, Dec 16, 2020 at 04:23:22PM +0100, Paolo Bonzini wrote:
> On 16/12/20 16:12, Michael Roth wrote:
> > It looks like wrgsbase does save us ~20-30 cycles vs. vmload, but maybe
> > not enough to justify the added complexity. Additionally, since we still
> > need to call vmload when we exit to userspace, it ends up being a bit
> > slower for this particular workload, at least. So for now I'll plan on
> > sticking with vmload'ing after vmexit and moving that to the asm code
> > if there are no objections.
> 
> Yeah, agreed.  BTW you can use "./x86/run x86/vmexit.flat" from
> kvm-unit-tests to check the numbers for a wide range of vmexit paths.

Wasn't aware of that; this looks really useful. Thanks!
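
For reference, the vmload-after-vmexit approach mentioned above boils down
to something like the sketch below. This is only an illustration of the
idea, not the actual patch: the helper names, the per-CPU host save area,
and the exact placement of the calls are placeholders/assumptions.

/*
 * Sketch only.  VMSAVE stores the host's FS/GS/TR/LDTR state plus
 * KERNEL_GS_BASE, STAR/LSTAR/CSTAR/SFMASK and the SYSENTER MSRs into the
 * save area at the given physical address; a single VMLOAD after VMRUN
 * then restores all of it, replacing the manual MSR/segment restore path.
 */
#include <linux/kernel.h>

/* from arch/x86/kvm/svm/vmenter.S (signature assumed from this era) */
void __svm_vcpu_run(unsigned long vmcb_pa, unsigned long *regs);

static __always_inline void svm_vmsave(unsigned long pa)
{
        asm volatile("vmsave %%rax" : : "a" (pa) : "memory");
}

static __always_inline void svm_vmload(unsigned long pa)
{
        asm volatile("vmload %%rax" : : "a" (pa) : "memory");
}

/* host_save_pa: physical address of a per-CPU host save area (placeholder) */
static void run_guest_once(unsigned long vmcb_pa, unsigned long host_save_pa,
                           unsigned long *guest_regs)
{
        svm_vmsave(host_save_pa);               /* stash host state */
        __svm_vcpu_run(vmcb_pa, guest_regs);    /* VMRUN ... #VMEXIT */
        svm_vmload(host_save_pa);               /* restore host state */
}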

> 
> Paolo
> 
> > current v2 patch, sample 1
> >    ioctl entry: 1204722748832
> >    pre-vmrun:   1204722749408 ( +576)
> >    post-vmrun:  1204722750784 (+1376)
> >    ioctl exit:  1204722751360 ( +576)
> >    total cycles:         2528
> > 
> > current v2 patch, sample 2
> >    ioctl entry: 1204722754784
> >    pre-vmrun:   1204722755360 ( +576)
> >    post-vmrun:  1204722756720 (+1360)
> >    ioctl exit:  1204722757312 ( +592)
> >    total cycles:         2528
> > 
> > wrgsbase, sample 1
> >    ioctl entry: 1346624880336
> >    pre-vmrun:   1346624880912 ( +576)
> >    post-vmrun:  1346624882256 (+1344)
> >    ioctl exit:  1346624882912 ( +656)
> >    total cycles:         2576
> > 
> > wrgsbase, sample 2
> >    ioctl entry: 1346624886272
> >    pre-vmrun:   1346624886832 ( +560)
> >    post-vmrun:  1346624888176 (+1344)
> >    ioctl exit:  1346624888816 ( +640)
> >    total cycles:         2544
> > 
> 

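For context on how to read the samples above: each one is four raw TSC
timestamps (at ioctl entry, just before VMRUN, just after VMRUN, and at
ioctl return), plus the per-stage deltas and the entry-to-exit total in
cycles. Below is a minimal sketch of that kind of instrumentation, assuming
plain rdtsc() reads and trace_printk() reporting; both are assumptions, not
necessarily how the numbers above were produced.

#include <linux/kernel.h>       /* trace_printk(), u64 */
#include <asm/msr.h>            /* rdtsc() */

struct vmrun_stamps {
        u64 ioctl_entry;        /* taken on entering the KVM_RUN ioctl */
        u64 pre_vmrun;          /* taken just before VMRUN */
        u64 post_vmrun;         /* taken just after #VMEXIT */
        u64 ioctl_exit;         /* taken on returning to userspace */
};

static inline void stamp(u64 *slot)
{
        *slot = rdtsc();        /* raw TSC read; assumes invariant TSC */
}

static void report_stamps(const struct vmrun_stamps *s)
{
        trace_printk("ioctl entry: %llu\n", s->ioctl_entry);
        trace_printk("pre-vmrun:   %llu (+%llu)\n",
                     s->pre_vmrun, s->pre_vmrun - s->ioctl_entry);
        trace_printk("post-vmrun:  %llu (+%llu)\n",
                     s->post_vmrun, s->post_vmrun - s->pre_vmrun);
        trace_printk("ioctl exit:  %llu (+%llu)\n",
                     s->ioctl_exit, s->ioctl_exit - s->post_vmrun);
        trace_printk("total cycles: %llu\n",
                     s->ioctl_exit - s->ioctl_entry);
}

The pre/post stamps would go immediately around the call into the VMRUN
asm, and the entry/exit stamps at the top and bottom of the KVM_RUN ioctl
handler (again, placement here is an assumption for illustration).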