Message-ID: <20200527212200.GH48741@kernel.org>
Date:   Thu, 28 May 2020 00:22:00 +0300
From:   Mike Rapoport <rppt@...nel.org>
To:     Dave Hansen <dave.hansen@...el.com>
Cc:     Liran Alon <liran.alon@...cle.com>,
        "Kirill A. Shutemov" <kirill@...temov.name>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Andy Lutomirski <luto@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Sean Christopherson <sean.j.christopherson@...el.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        David Rientjes <rientjes@...gle.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Kees Cook <keescook@...omium.org>,
        Will Drewry <wad@...omium.org>,
        "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
        "Kleen, Andi" <andi.kleen@...el.com>, x86@...nel.org,
        kvm@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [RFC 00/16] KVM protected memory extension

On Wed, May 27, 2020 at 08:45:33AM -0700, Dave Hansen wrote:
> On 5/26/20 4:38 AM, Mike Rapoport wrote:
> > On Tue, May 26, 2020 at 01:16:14PM +0300, Liran Alon wrote:
> >> On 26/05/2020 9:17, Mike Rapoport wrote:
> >>> On Mon, May 25, 2020 at 04:47:18PM +0300, Liran Alon wrote:
> >>>> On 22/05/2020 15:51, Kirill A. Shutemov wrote:
> >>>>
> >>> Out of curiosity, do we actually have some numbers for the "non-trivial
> >>> performance cost"? For instance for KVM usecase?
> >>>
> >> Dig into XPFO mailing-list discussions to find out...
> >> I just remember that this was one of the main concerns regarding XPFO.
> >
> > The XPFO benchmarks measure the total XPFO cost, and a huge share of it
> > comes from TLB shootdowns.
> 
> Yes, TLB shootdown when pages transition between owners is huge.  The
> XPFO folks did a lot of work to try to optimize some of this overhead
> away.  But, it's still a concern.
> 
> The concern with XPFO was that it could affect *all* application page
> allocation.  This approach cheats a bit and only goes after guest VM
> pages.  It's significantly more work to allocate a page and map it into
> a guest than it is to, for instance, allocate an anonymous user page.
> That means that the *additional* overhead of things like this for guest
> memory matter a lot less.
> 
> > It's not exactly a measurement of the impact of the direct map
> > fragmentation on a workload running inside a virtual machine.
> 
> While the VM *itself* is running, there is zero overhead.  The host
> direct map is not used at *all*.  The guest and host TLB entries share
> the same space in the TLB so there could be some increased pressure on
> the TLB, but that's a really secondary effect.  It would also only occur
> if the guest exits and the host runs and starts evicting TLB entries.
> 
> The other effects I could think of would be when the guest exits and the
> host is doing some work for the guest, like emulation or something.  The
> host would see worse TLB behavior because the host is using the
> (fragmented) direct map.
> 
> But, both of those things require VMEXITs.  The more exits, the more
> overhead you _might_ observe.  What I've been hearing from KVM folks is
> that exits are getting more and more rare and the hardware designers are
> working hard to minimize them.

Right, when the guest stays in guest mode there is no overhead. But
guests still exit sometimes, and I was wondering whether anybody had
measured the difference in overhead with different page sizes used for
the host's direct map.

My guesstimate is that the overhead will not differ much for most
workloads. But still, it would be interesting to *know* what it is.
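
If it helps, one rough way to see the host's direct map split (and how
it changes once guest memory is unmapped from it) is the DirectMap*
fields in /proc/meminfo. A minimal sketch, assuming an x86 host; the
actual TLB impact would still need to be measured separately, e.g. with
perf dTLB miss counters on the host side:

/*
 * Hedged sketch: dump the host direct map split from /proc/meminfo.
 * DirectMap4k / DirectMap2M / DirectMap1G are x86-specific fields;
 * comparing them with and without guest memory removed from the
 * direct map gives a rough idea of how much fragmentation there is,
 * not of the resulting TLB overhead itself.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}

	/* Print only the DirectMap* lines. */
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "DirectMap", 9))
			fputs(line, stdout);

	fclose(f);
	return 0;
}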

> That's especially good news because it means that even if the situation
> isn't perfect, it's only bound to get *better* over time, not worse.

Processors have been aggressively improving performance for decades,
and look where we are now because of it ;-)

-- 
Sincerely yours,
Mike.
