Message-ID: <CA+VK+GOL_sY5aWYijg1_X6VgvDtFbRX2ymuSXhsZeZH2_tO2qg@mail.gmail.com>
Date: Fri, 17 May 2019 17:05:32 -0700
From: Jonathan Adams <jwadams@...gle.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Andy Lutomirski <luto@...capital.net>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Alexandre Chartre <alexandre.chartre@...cle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Radim Krcmar <rkrcmar@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
kvm list <kvm@...r.kernel.org>, X86 ML <x86@...nel.org>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Jan Setje-Eilers <jan.setjeeilers@...cle.com>,
Liran Alon <liran.alon@...cle.com>
Subject: Re: [RFC KVM 18/27] kvm/isolation: function to copy page table
entries for percpu buffer
On Tue, May 14, 2019 at 3:38 PM Sean Christopherson
<sean.j.christopherson@...el.com> wrote:
> On Tue, May 14, 2019 at 02:55:18PM -0700, Andy Lutomirski wrote:
> > > On May 14, 2019, at 2:06 PM, Sean Christopherson <sean.j.christopherson@...el.com> wrote:
> > >> On Tue, May 14, 2019 at 01:33:21PM -0700, Andy Lutomirski wrote:
> > >> I suspect that the context switch is a bit of a red herring. A
> > >> PCID-don't-flush CR3 write is IIRC under 300 cycles. Sure, it's slow,
> > >> but it's probably minor compared to the full cost of the vm exit. The
> > >> pain point is kicking the sibling thread.
> > >
> > > Speaking of PCIDs, a separate mm for KVM would mean consuming another
> > > ASID, which isn't good.
> >
> > I’m not sure we care. We have many logical address spaces (two per mm plus a
> > few more). We have 4096 PCIDs, but we only use ten or so. And we have some
> > undocumented number of *physical* ASIDs with some undocumented mechanism by
> > which PCID maps to a physical ASID.
>
> Yeah, I was referring to physical ASIDs.
>
> > I don’t suppose you know how many physical ASIDs we have?
>
> Limited number of physical ASIDs. I'll leave it at that so as not to
> disclose something I shouldn't.
>
> > And how it interacts with the VPID stuff?
>
> VPID and PCID get factored into the final ASID, i.e. changing either one
> results in a new ASID. The SDM's oblique way of saying that:
>
> VPIDs and PCIDs (see Section 4.10.1) can be used concurrently. When this
> is done, the processor associates cached information with both a VPID and
> a PCID. Such information is used only if the current VPID and PCID both
> match those associated with the cached information.
>
> E.g. enabling PTI in both the host and guest consumes four ASIDs just to
> run a single task in the guest:
>
> - VPID=0, PCID=kernel
> - VPID=0, PCID=user
> - VPID=1, PCID=kernel
> - VPID=1, PCID=user
>
> The impact of consuming another ASID for KVM would likely depend on both
> the guest and host configurations/workloads, e.g. if the guest is using a
> lot of PCIDs then it's probably a moot point. It's something to keep in
> mind though if we go down this path.
One answer to that would be to have the KVM page tables use the same
PCID as the normal user-mode PTI page tables. It's not ideal (the
qemu/whatever process could then see, via Meltdown, some kernel data
it wouldn't normally be able to see), but it might be an option worth
investigating.
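
To make the idea concrete, here's a purely illustrative sketch (not code
from this series; make_cr3(), kvm_pgd_pa and user_pti_pcid are made-up
names). With CR4.PCIDE=1 the PCID lives in bits 11:0 of the value written
to CR3, and setting bit 63 asks the CPU not to flush the TLB entries
tagged with that PCID. Reusing the user-PTI PCID for the KVM-private
tables would just mean composing the KVM CR3 with that same PCID:

/*
 * Illustrative sketch only.  With CR4.PCIDE=1, bits 11:0 of the value
 * written to CR3 select the PCID, and bit 63 of that value requests
 * that TLB entries tagged with the PCID be preserved ("don't flush").
 * The page-table root is 4KB-aligned, so its low 12 bits are free.
 */
#include <stdint.h>

#define CR3_PCID_MASK	0xfffull	/* bits 11:0: PCID */
#define CR3_NOFLUSH	(1ull << 63)	/* keep TLB entries for this PCID */

/* Hypothetical helper: compose a CR3 value from a PGD and a PCID. */
static inline uint64_t make_cr3(uint64_t pgd_pa, uint16_t pcid, int noflush)
{
	uint64_t cr3 = (pgd_pa & ~CR3_PCID_MASK) | (pcid & CR3_PCID_MASK);

	return noflush ? cr3 | CR3_NOFLUSH : cr3;
}

/*
 * Entering the KVM-private mapping would then be roughly:
 *
 *	write_cr3(make_cr3(kvm_pgd_pa, user_pti_pcid, 1));
 *
 * No extra ASID is consumed, at the cost described above: TLB entries
 * created under the user-PTI tables stay reachable under the same PCID.
 */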
Cheers,
- jonathan