Message-ID: <690ae2cb2099ac3e13c3da530a1b4a4eb5bafc5a.camel@intel.com>
Date: Tue, 10 May 2022 22:42:10 +1200
From: Kai Huang <kai.huang@...el.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Dave Hansen <dave.hansen@...el.com>,
Sathyanarayanan Kuppuswamy
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H . Peter Anvin" <hpa@...or.com>, Tony Luck <tony.luck@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
Wander Lairson Costa <wander@...hat.com>,
Isaku Yamahata <isaku.yamahata@...il.com>,
marcelo.cerri@...onical.com, tim.gardner@...onical.com,
khalid.elmously@...onical.com, philip.cox@...onical.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 3/3] x86/tdx: Add Quote generation support
On Tue, 2022-05-10 at 11:54 +1200, Kai Huang wrote:
> On Mon, 2022-05-09 at 15:09 +0300, Kirill A. Shutemov wrote:
> > On Mon, May 09, 2022 at 03:37:22PM +1200, Kai Huang wrote:
> > > On Sat, 2022-05-07 at 03:42 +0300, Kirill A. Shutemov wrote:
> > > > On Fri, May 06, 2022 at 12:11:03PM +1200, Kai Huang wrote:
> > > > > Kirill, what's your opinion?
> > > >
> > > > I said before that I think DMA API is the right tool here.
> > > >
> > > > Speculation about the future of DMA in TDX is irrelevant here. If the
> > > > semantics change, we will need to re-evaluate all users. VirtIO uses the
> > > > DMA API and it is conceptually the same use-case: communicating with the
> > > > host.
> > >
> > > Virtio is designed for device drivers to use, so it's fine to use the DMA
> > > API. And real DMA can happen to the virtio DMA buffers. Attestation doesn't
> > > have such an assumption.
> >
> > Whether the attestation driver uses struct device is an implementation detail.
> > I don't see what your point is.
>
> No real DMA is involved in attestation.
>
> >
> > > So I don't see why the TD guest kernel cannot have a simple protocol to
> > > vmap() a page (or a couple of pages) as shared on-demand, like below:
> > >
> > > page = alloc_page(GFP_KERNEL);
> > >
> > > addr = vmap(&page, 1, VM_MAP, pgprot_decrypted(PAGE_KERNEL));
> > >
> > > clflush_cache_range(page_address(page), PAGE_SIZE);
> > >
> > > MapGPA(page_to_phys(page) | cc_mkdec(0), PAGE_SIZE);
> > >
> > > And we can even avoid the above clflush_cache_range() if I understand correctly.
> > >
> > > Or I missed something?
> >
> > For completeness, cover the free path too. Are you going to open-code page
> > accept too?
>
> Call __tdx_module_call(TDX_ACCEPT_PAGE, ...) right after MapGPA() to convert
> back to private. I don't see any problem there.
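
(To spell out the free path: this is only a sketch, reusing MapGPA() and
__tdx_module_call() the same way as in the allocation snippet above; the exact
in-kernel helpers may end up looking different.)

vunmap(addr);

/* No shared bit in the GPA: request conversion back to private */
MapGPA(page_to_phys(page), PAGE_SIZE);

/* Re-accept the page now that it is private again */
__tdx_module_call(TDX_ACCEPT_PAGE, page_to_phys(page), 0, 0, 0, NULL);

__free_page(page);
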
>
> >
> > Private->Shared conversion is destructive. You have to split the SEPT and
> > flush the TLB. The backward conversion is even more costly.
>
> I think I won't call it destructive.
>
> And as I suggested before, we can allocate a default-size buffer (e.g. 4
> pages), which is large enough to cover all requests for now, during driver
> initialization. This avoids IOCTL-time conversion. We should still have code
> in the IOCTL to check the request buffer size, and when it is larger than the
> default, the old buffer should be freed and a larger one allocated. But for
> now this code path will never happen.
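
In pseudo-code, something like the below (quote_buf_alloc()/quote_buf_free()
are only illustrative names here, wrapping the vmap() + MapGPA() allocation
sequence above and the corresponding free path; they are not existing kernel
functions):

#define QUOTE_BUF_DEFAULT_PAGES        4

static void *quote_buf;                /* shared request buffer */
static size_t quote_buf_pages;

static int __init tdx_attest_init(void)
{
        /* One default-size conversion at driver load, none at IOCTL time */
        quote_buf = quote_buf_alloc(QUOTE_BUF_DEFAULT_PAGES);
        if (!quote_buf)
                return -ENOMEM;

        quote_buf_pages = QUOTE_BUF_DEFAULT_PAGES;
        return 0;
}

/* Called from the IOCTL; today a request never exceeds the default size */
static int quote_buf_ensure(size_t req_pages)
{
        void *new_buf;

        if (req_pages <= quote_buf_pages)
                return 0;

        new_buf = quote_buf_alloc(req_pages);
        if (!new_buf)
                return -ENOMEM;

        quote_buf_free(quote_buf, quote_buf_pages);
        quote_buf = new_buf;
        quote_buf_pages = req_pages;
        return 0;
}
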
>
> Btw, the above is based on the assumption that we don't support concurrent
> IOCTLs. In this version Sathya changed it to support concurrent IOCTLs, which
> was a surprise, as I thought we had agreed we don't need to support this.
Hi Dave,

Sorry, I forgot to mention that GHCI 1.5 defines a generic TDVMCALL<Service>
for a TD to communicate with the VMM, another TD, or some service in the host.
This TDVMCALL can support many sub-commands. For now only the sub-commands for
TD migration are defined, but more can be added.

For this, we cannot assume the size of the command buffer, and I don't see why
we wouldn't want to support concurrent TDVMCALLs. So it looks like, in the long
term, we will very likely need IOCTL-time private-shared conversion of the
buffer.
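As a rough sketch of what that IOCTL-time conversion could look like
(quote_buf_alloc()/quote_buf_free() again stand for whatever conversion helper
we settle on, vmap()-based or set_memory_decrypted()-based, and
do_tdvmcall_service() is just a placeholder for issuing the GHCI 1.5
TDVMCALL<Service>):

/*
 * Each request gets its own shared buffer sized for its command, so
 * concurrent callers don't have to serialize on one pre-allocated buffer.
 */
static long tdx_service_request(void *cmd, size_t cmd_len)
{
        size_t npages = PFN_UP(cmd_len);
        void *buf;
        long ret;

        buf = quote_buf_alloc(npages);                  /* private -> shared */
        if (!buf)
                return -ENOMEM;

        memcpy(buf, cmd, cmd_len);
        ret = do_tdvmcall_service(buf, cmd_len);        /* TDVMCALL<Service> */
        if (!ret)
                memcpy(cmd, buf, cmd_len);

        quote_buf_free(buf, npages);                    /* shared -> private */
        return ret;
}
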
--
Thanks,
-Kai