Message-ID: <20190514223823.GE1977@linux.intel.com>
Date:   Tue, 14 May 2019 15:38:23 -0700
From:   Sean Christopherson <sean.j.christopherson@...el.com>
To:     Andy Lutomirski <luto@...capital.net>
Cc:     Andy Lutomirski <luto@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Alexandre Chartre <alexandre.chartre@...cle.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krcmar <rkrcmar@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        kvm list <kvm@...r.kernel.org>, X86 ML <x86@...nel.org>,
        Linux-MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
        jan.setjeeilers@...cle.com, Liran Alon <liran.alon@...cle.com>,
        Jonathan Adams <jwadams@...gle.com>
Subject: Re: [RFC KVM 18/27] kvm/isolation: function to copy page table
 entries for percpu buffer

On Tue, May 14, 2019 at 02:55:18PM -0700, Andy Lutomirski wrote:
> 
> > On May 14, 2019, at 2:06 PM, Sean Christopherson <sean.j.christopherson@...el.com> wrote:
> > 
> >> On Tue, May 14, 2019 at 01:33:21PM -0700, Andy Lutomirski wrote:
> >> I suspect that the context switch is a bit of a red herring.  A
> >> PCID-don't-flush CR3 write is IIRC under 300 cycles.  Sure, it's slow,
> >> but it's probably minor compared to the full cost of the vm exit.  The
> >> pain point is kicking the sibling thread.
> > 
> > Speaking of PCIDs, a separate mm for KVM would mean consuming another
> > ASID, which isn't good.
> 
> I’m not sure we care. We have many logical address spaces (two per mm plus a
> few more).  We have 4096 PCIDs, but we only use ten or so.  And we have some
> undocumented number of *physical* ASIDs with some undocumented mechanism by
> which PCID maps to a physical ASID.

Yeah, I was referring to physical ASIDs.

> I don’t suppose you know how many physical ASIDs we have?

Limited number of physical ASIDs.  I'll leave it at that so as not to
disclose something I shouldn't.

> And how it interacts with the VPID stuff?

VPID and PCID both get factored into the final ASID, i.e. changing either
one results in a new ASID.  The SDM says as much, albeit obliquely:

  VPIDs and PCIDs (see Section 4.10.1) can be used concurrently. When this
  is done, the processor associates cached information with both a VPID and
  a PCID. Such information is used only if the current VPID and PCID both
  match those associated with the cached information.

E.g. enabling PTI in both the host and guest consumes four ASIDs just to
run a single task in the guest:

  - VPID=0, PCID=kernel
  - VPID=0, PCID=user
  - VPID=1, PCID=kernel
  - VPID=1, PCID=user

The impact of consuming another ASID for KVM would likely depend on both
the guest and host configurations/workloads, e.g. if the guest is using a
lot of PCIDs then it's probably a moot point.  It's something to keep in
mind though if we go down this path.
