[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20230623144315.GC3436214@ls.amr.corp.intel.com>
Date: Fri, 23 Jun 2023 07:43:15 -0700
From: Isaku Yamahata <isaku.yamahata@...il.com>
To: "Huang, Kai" <kai.huang@...el.com>
Cc: "isaku.yamahata@...il.com" <isaku.yamahata@...il.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"liam.merwick@...cle.com" <liam.merwick@...cle.com>,
"tobin@....com" <tobin@....com>,
"alpergun@...gle.com" <alpergun@...gle.com>,
"Luck, Tony" <tony.luck@...el.com>,
"jmattson@...gle.com" <jmattson@...gle.com>,
"Lutomirski, Andy" <luto@...nel.org>,
"ak@...ux.intel.com" <ak@...ux.intel.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"srinivas.pandruvada@...ux.intel.com"
<srinivas.pandruvada@...ux.intel.com>,
"slp@...hat.com" <slp@...hat.com>,
"dovmurik@...ux.ibm.com" <dovmurik@...ux.ibm.com>,
"michael.roth@....com" <michael.roth@....com>,
"peterz@...radead.org" <peterz@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"pgonda@...gle.com" <pgonda@...gle.com>,
"thomas.lendacky@....com" <thomas.lendacky@....com>,
"rientjes@...gle.com" <rientjes@...gle.com>,
"Wang, Zhi A" <zhi.a.wang@...el.com>,
"x86@...nel.org" <x86@...nel.org>, "bp@...en8.de" <bp@...en8.de>,
"Annapurve, Vishal" <vannapurve@...gle.com>,
"dgilbert@...hat.com" <dgilbert@...hat.com>,
"Christopherson,, Sean" <seanjc@...gle.com>,
"vkuznets@...hat.com" <vkuznets@...hat.com>,
"marcorr@...gle.com" <marcorr@...gle.com>,
"vbabka@...e.cz" <vbabka@...e.cz>,
"ashish.kalra@....com" <ashish.kalra@....com>,
"linux-coco@...ts.linux.dev" <linux-coco@...ts.linux.dev>,
"nikunj.dadhania@....com" <nikunj.dadhania@....com>,
"Rodel, Jorg" <jroedel@...e.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"sathyanarayanan.kuppuswamy@...ux.intel.com"
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
"hpa@...or.com" <hpa@...or.com>,
"kirill@...temov.name" <kirill@...temov.name>,
"jarkko@...nel.org" <jarkko@...nel.org>,
"ardb@...nel.org" <ardb@...nel.org>,
"linux-crypto@...r.kernel.org" <linux-crypto@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>
Subject: Re: [PATCH RFC v9 04/51] KVM: x86: Determine shared/private faults
using a configurable mask
On Thu, Jun 22, 2023 at 11:52:56PM +0000,
"Huang, Kai" <kai.huang@...el.com> wrote:
> On Thu, 2023-06-22 at 16:39 -0700, Isaku Yamahata wrote:
> > On Thu, Jun 22, 2023 at 10:31:08PM +0000,
> > "Huang, Kai" <kai.huang@...el.com> wrote:
> >
> > > > If there are better ways to handle *how*
> > > > that's done I don't have any complaints there, but moving/adding bits
> > > > to GPA/error_flags after fault time just seems unecessary to me when
> > > > fault->is_private field can serve that purpose just as well.
> > >
> > > Perhaps you missed my point. My point is arch.mmu_private_fault_mask and
> > > arch.gfn_shared_mask seem redundant because the logic around them are exactly
> > > the same. I do believe we should have fault->is_private passing to the common
> > > MMU code.
> > >
> > > In fact, now I am wondering why we need to have "mmu_private_fault_mask" and
> > > "gfn_shared_mask" in _common_ KVM MMU code. We already have enough mechanism in
> > > KVM common code:
> > >
> > > 1) fault->is_private
> > > 2) kvm_mmu_page_role.private
> > > 3) an Xarray to tell whether a GFN is private or shared
> > >
> > > I am not convinced that we need to have "mmu_private_fault_mask" and
> > > "gfn_shared_mask" in common KVM MMU code. Instead, they can be in AMD and
> > > Intel's vendor code.
> > >
> > > Maybe it makes sense to have "gfn_shared_mask" in the KVM common code so that
> > > the fault handler can just strip away the "shared bit" at the very beginning (at
> > > least for TDX), but for the rest of the time I think we should already have
> > > enough infrastructure to handle private/shared mapping.
> > >
> > > Btw, one minor issue is, if I recall correctly, for TDX the shared bit must be
> > > applied to the GFN for shared mapping in normal EPT. I guess AMD doesn't need
> > > that for shared mapping. So "gfn_shared_mask" maybe useful in this case, but
> > > w/o it I believe we can also achieve in another way via vendor callback.
> >
> >
> > "2) kvm_mmu_page_role.private" above has different meaning.
> >
> > a). The fault is private or not.
> > b). page table the fault handler is walking is private or conventional.
> >
> > a.) is common for SNP, TDX and PROTECTED_VM. It makes sense in
> > kvm_mmu_do_page_fault() and __kvm_faultin_pfn(). After kvm_faultin_pfn(), the
> > fault handler can mostly forget it for SNP and PROTECTED_VM. (large page
> > adjustment needs it, though.) This is what we're discussing in this thread.
> >
> > b.) is specific to TDX. TDX KVM MMU introduces one more page table.
> >
> >
>
> I don't buy the last sentence. Even it's not necessarily for AMD from
> hardware's perspective, but the concept remains true for AMD too. So why cannot
> we use it for AMD?
We can use it for AMD. Let me rephrase it.
TDX only uses it now. SEV-SNP may or may not use it at their option.
--
Isaku Yamahata <isaku.yamahata@...il.com>
Powered by blists - more mailing lists