Message-ID: <20230217131949.oj4jz4dbvhyen5rl@box.shutemov.name>
Date: Fri, 17 Feb 2023 16:19:49 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Dexuan Cui <decui@...rosoft.com>
Cc: ak@...ux.intel.com, arnd@...db.de, bp@...en8.de,
brijesh.singh@....com, dan.j.williams@...el.com,
dave.hansen@...ux.intel.com, haiyangz@...rosoft.com, hpa@...or.com,
jane.chu@...cle.com, kirill.shutemov@...ux.intel.com,
kys@...rosoft.com, linux-arch@...r.kernel.org,
linux-hyperv@...r.kernel.org, luto@...nel.org, mingo@...hat.com,
peterz@...radead.org, rostedt@...dmis.org,
sathyanarayanan.kuppuswamy@...ux.intel.com, seanjc@...gle.com,
tglx@...utronix.de, tony.luck@...el.com, wei.liu@...nel.org,
x86@...nel.org, mikelley@...rosoft.com,
linux-kernel@...r.kernel.org, Tianyu.Lan@...rosoft.com
Subject: Re: [PATCH v3 2/6] x86/tdx: Support vmalloc() for
tdx_enc_status_changed()
On Mon, Feb 06, 2023 at 11:24:15AM -0800, Dexuan Cui wrote:
> When a TDX guest runs on Hyper-V, the hv_netvsc driver's netvsc_init_buf()
> allocates buffers using vzalloc(), and needs to share the buffers with the
> host OS by calling set_memory_decrypted(), which is not working for
> vmalloc() yet. Add the support by handling the pages one by one.
>
> Signed-off-by: Dexuan Cui <decui@...rosoft.com>
>
> ---
>
> Changes in v2:
> Changed tdx_enc_status_changed() in place.
>
> Hi, Dave, I checked the huge vmalloc mapping code, but still don't know
> how to get the underlying huge page info (if huge page is in use) and
> try to use PG_LEVEL_2M/1G in try_accept_page() for vmalloc: I checked
> is_vm_area_hugepages() and __vfree() -> __vunmap(), and I think the
> underlying page allocation info is internal to the mm code, and there
> is no mm API for me to get the info in tdx_enc_status_changed().
I also don't see an obvious way to retrieve this info after vmalloc() is
complete. split_page() makes all pages independent.
I think you can try to do this manually: allocate a vmalloc region,
allocate pages manually, and put into the region. This way you always know
page sizes and can optimize conversion to shared memory.
But it is tedious and I'm not sure if it worth the gain.
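For illustration, the manual approach could look roughly like the sketch
below (a hypothetical, uncompiled outline against existing kernel APIs;
alloc_shared_buf() is a made-up name and all error handling is omitted):

```c
/*
 * Sketch only: back a vmalloc-style region with explicitly allocated,
 * physically contiguous 2M chunks. Because the caller did the
 * allocation, it knows the page size and could convert to shared in
 * PG_LEVEL_2M units instead of page by page.
 */
static void *alloc_shared_buf(size_t size)
{
	unsigned int nr_pages = size >> PAGE_SHIFT;
	unsigned int per_chunk = 1 << (PMD_SHIFT - PAGE_SHIFT);
	unsigned int i, j;
	struct page **pages;
	void *va;

	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);

	for (i = 0; i < nr_pages; i += per_chunk) {
		/* one physically contiguous 2M chunk */
		struct page *p = alloc_pages(GFP_KERNEL,
					     PMD_SHIFT - PAGE_SHIFT);

		for (j = 0; j < per_chunk; j++)
			pages[i + j] = p + j;
	}

	/* map the chunks into a contiguous virtual region */
	va = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);

	/* 2M-granular conversion is now possible: the layout is known */

	return va;
}
```

Note vmap() still installs 4K PTEs here; the win is only that the
*conversion* (MapGPA/accept) can be batched in 2M units, which is why it
may not be worth the complexity.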
> Hi, Kirill, the load_unaligned_zeropad() issue is not addressed in
> this patch. The issue looks like a generic issue that also happens to
> AMD SNP vTOM mode and C-bit mode. Will need to figure out how to
> address the issue. If we decide to adjust direct mapping to have the
> shared bit set, it lools like we need to do the below for each
> 'start_va' vmalloc page:
> pa = slow_virt_to_phys(start_va);
> set_memory_decrypted(phys_to_virt(pa), 1); -- this line calls
> tdx_enc_status_changed() the second time for the same page, which is not
> great. It looks like we need to find a way to reuse the cpa_flush()
> related code in __set_memory_enc_pgtable() and make sure we call
> tdx_enc_status_changed() only once for the same page from vmalloc()?
Actually, current code will change direct mapping for you. I just
double-checked: the alias processing in __change_page_attr_set_clr() will
change direct mapping if you call it on vmalloc()ed memory.
Splitting direct mapping is still unfortunate, but well.
>
> Changes in v3:
> No change since v2.
>
> arch/x86/coco/tdx/tdx.c | 69 ++++++++++++++++++++++++++---------------
> 1 file changed, 44 insertions(+), 25 deletions(-)
I don't hate what you did here. But I think the code below is a bit
cleaner.
Any opinions?
static bool tdx_enc_status_changed_phys(phys_addr_t start, phys_addr_t end,
					bool enc)
{
	if (!tdx_map_gpa(start, end, enc))
		return false;

	/* private->shared conversion requires only MapGPA call */
	if (!enc)
		return true;

	return try_accept_page(start, end);
}

/*
 * Inform the VMM of the guest's intent for this physical page: shared with
 * the VMM or private to the guest. The VMM is expected to change its mapping
 * of the page in response.
 */
static bool tdx_enc_status_changed(unsigned long start, int numpages, bool enc)
{
	unsigned long end = start + numpages * PAGE_SIZE;

	if (offset_in_page(start) != 0)
		return false;

	if (!is_vmalloc_addr((void *)start))
		return tdx_enc_status_changed_phys(__pa(start), __pa(end), enc);

	while (start < end) {
		phys_addr_t start_pa = slow_virt_to_phys((void *)start);
		phys_addr_t end_pa = start_pa + PAGE_SIZE;

		if (!tdx_enc_status_changed_phys(start_pa, end_pa, enc))
			return false;

		start += PAGE_SIZE;
	}

	return true;
}
--
Kiryl Shutsemau / Kirill A. Shutemov