Message-ID: <f9c71fde-714e-9e0a-48ec-05be1ce6d76b@intel.com>
Date: Fri, 18 Mar 2022 08:53:27 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
luto@...nel.org, peterz@...radead.org
Cc: sathyanarayanan.kuppuswamy@...ux.intel.com, aarcange@...hat.com,
ak@...ux.intel.com, dan.j.williams@...el.com, david@...hat.com,
hpa@...or.com, jgross@...e.com, jmattson@...gle.com,
joro@...tes.org, jpoimboe@...hat.com, knsathya@...nel.org,
pbonzini@...hat.com, sdeep@...are.com, seanjc@...gle.com,
tony.luck@...el.com, vkuznets@...hat.com, wanpengli@...cent.com,
thomas.lendacky@....com, brijesh.singh@....com, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCHv7 27/30] x86/mm: Make DMA memory shared for TD guest
On 3/18/22 08:30, Kirill A. Shutemov wrote:
> Intel TDX doesn't allow VMM to directly access guest private memory.
> Any memory that is required for communication with the VMM must be
> shared explicitly. The same rule applies for any DMA to and from the
> TDX guest. All DMA pages have to be marked as shared pages. A generic way
> to achieve this without any changes to device drivers is to use the
> SWIOTLB framework.
>
> In TDX guest, CC_ATTR_GUEST_MEM_ENCRYPT is set. It makes all DMA to be
> rerouted via SWIOTLB (see pci_swiotlb_detect()). mem_encrypt_init()
> generalized to cover TDX. It makes SWIOTLB buffer shared.
It would be nice to have one transition paragraph linking the previous
patch to this one:
The previous patch ("Add support for TDX shared memory") gave TDX guests
the _ability_ to make some pages shared, but did not actually make any
pages shared. This actually marks SWIOTLB buffers *as* shared.
Start returning true for cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) in
TDX guests. This has several implications:
* Allows the existing mem_encrypt_init() to be used for TDX which
sets SWIOTLB buffers shared (aka. "decrypted").
* Ensures that all DMA is routed via the SWIOTLB mechanism (see
pci_swiotlb_detect())
> Stop selecting DYNAMIC_PHYSICAL_MASK directly. It will get set
> indirectly by selcting X86_MEM_ENCRYPT.
^ selecting
> mem_encrypt_init() is currently under an AMD-specific #ifdef. Move it to
> a generic area of the header.
That new paragraph was kinda funky. With the changelog improvements above:
Reviewed-by: Dave Hansen <dave.hansen@...ux.intel.com>