Message-ID: <20230105114435.000078e4@gmail.com>
Date: Thu, 5 Jan 2023 11:44:35 +0200
From: Zhi Wang <zhi.wang.linux@...il.com>
To: Dexuan Cui <decui@...rosoft.com>
Cc: ak@...ux.intel.com, arnd@...db.de, bp@...en8.de,
brijesh.singh@....com, dan.j.williams@...el.com,
dave.hansen@...ux.intel.com, haiyangz@...rosoft.com, hpa@...or.com,
jane.chu@...cle.com, kirill.shutemov@...ux.intel.com,
kys@...rosoft.com, linux-arch@...r.kernel.org,
linux-hyperv@...r.kernel.org, luto@...nel.org, mingo@...hat.com,
peterz@...radead.org, rostedt@...dmis.org,
sathyanarayanan.kuppuswamy@...ux.intel.com, seanjc@...gle.com,
tglx@...utronix.de, tony.luck@...el.com, wei.liu@...nel.org,
x86@...nel.org, mikelley@...rosoft.com,
linux-kernel@...r.kernel.org, zhi.a.wang@...el.com
Subject: Re: [PATCH v2 2/6] x86/tdx: Support vmalloc() for tdx_enc_status_changed()
On Tue, 6 Dec 2022 16:33:21 -0800
Dexuan Cui <decui@...rosoft.com> wrote:
> When a TDX guest runs on Hyper-V, the hv_netvsc driver's
> netvsc_init_buf() allocates buffers using vzalloc(), and needs to share
> the buffers with the host OS by calling set_memory_decrypted(), which is
> not working for vmalloc() yet. Add the support by handling the pages one
> by one.
>
It seems the call to set_memory_decrypted() in netvsc_init_buf() is missing
from this patch series. I guess there should be an extra patch to cover
that.
> Signed-off-by: Dexuan Cui <decui@...rosoft.com>
>
> ---
>
> Changes in v2:
> Changed tdx_enc_status_changed() in place.
>
> Hi, Dave, I checked the huge vmalloc mapping code, but still don't know
> how to get the underlying huge page info (if a huge page is in use) and
> try to use PG_LEVEL_2M/1G in try_accept_page() for vmalloc: I checked
> is_vm_area_hugepages() and __vfree() -> __vunmap(), and I think the
> underlying page allocation info is internal to the mm code, and there
> is no mm API for me to get the info in tdx_enc_status_changed().
>
> Hi, Kirill, the load_unaligned_zeropad() issue is not addressed in
> this patch. It looks like a generic issue that also affects the
> AMD SNP vTOM and C-bit modes. We will need to figure out how to
> address it. If we decide to adjust the direct mapping to have the
> shared bit set, it looks like we need to do the below for each
> 'start_va' vmalloc page:
> pa = slow_virt_to_phys(start_va);
> set_memory_decrypted(phys_to_virt(pa), 1); -- this line calls
> tdx_enc_status_changed() the second time for the page, which is bad.
> It looks like we need to find a way to reuse the cpa_flush() related
> code in __set_memory_enc_pgtable() and make sure we call
> tdx_enc_status_changed() only once for a vmalloc page?
>
>
> arch/x86/coco/tdx/tdx.c | 69 ++++++++++++++++++++++++++---------------
> 1 file changed, 44 insertions(+), 25 deletions(-)
>
> diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> index cdeda698d308..795ac56f06b8 100644
> --- a/arch/x86/coco/tdx/tdx.c
> +++ b/arch/x86/coco/tdx/tdx.c
> @@ -5,6 +5,7 @@
> #define pr_fmt(fmt) "tdx: " fmt
>
> #include <linux/cpufeature.h>
> +#include <linux/mm.h>
> #include <asm/coco.h>
> #include <asm/tdx.h>
> #include <asm/vmx.h>
> @@ -693,6 +694,34 @@ static bool try_accept_one(phys_addr_t *start, unsigned long len,
>  	return true;
>  }
>
> +static bool try_accept_page(phys_addr_t start, phys_addr_t end)
> +{
> + /*
> + * For shared->private conversion, accept the page using
> + * TDX_ACCEPT_PAGE TDX module call.
> + */
> + while (start < end) {
> + unsigned long len = end - start;
> +
> + /*
> + * Try larger accepts first. It gives chance to VMM to keep
> + * 1G/2M SEPT entries where possible and speeds up process by
> + * cutting number of hypercalls (if successful).
> + */
> +
> + if (try_accept_one(&start, len, PG_LEVEL_1G))
> + continue;
> +
> + if (try_accept_one(&start, len, PG_LEVEL_2M))
> + continue;
> +
> + if (!try_accept_one(&start, len, PG_LEVEL_4K))
> + return false;
> + }
> +
> + return true;
> +}
> +
> /*
> * Notify the VMM about page mapping conversion. More info about ABI
> * can be found in TDX Guest-Host-Communication Interface (GHCI),
> @@ -749,37 +778,27 @@ static bool tdx_map_gpa(phys_addr_t start, phys_addr_t end, bool enc)
>  */
> static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
> {
> - phys_addr_t start = __pa(vaddr);
> - phys_addr_t end = __pa(vaddr + numpages * PAGE_SIZE);
> + bool is_vmalloc = is_vmalloc_addr((void *)vaddr);
> + unsigned long len = numpages * PAGE_SIZE;
> + void *start_va = (void *)vaddr, *end_va = start_va + len;
> + phys_addr_t start_pa, end_pa;
>
> - if (!tdx_map_gpa(start, end, enc))
> + if (offset_in_page(start_va) != 0)
> return false;
>
> - /* private->shared conversion requires only MapGPA call */
> - if (!enc)
> - return true;
> -
> - /*
> - * For shared->private conversion, accept the page using
> - * TDX_ACCEPT_PAGE TDX module call.
> - */
> - while (start < end) {
> - unsigned long len = end - start;
> -
> - /*
> - * Try larger accepts first. It gives chance to VMM to keep
> - * 1G/2M SEPT entries where possible and speeds up process by
> - * cutting number of hypercalls (if successful).
> - */
> -
> - if (try_accept_one(&start, len, PG_LEVEL_1G))
> - continue;
> + while (start_va < end_va) {
> + start_pa = is_vmalloc ? slow_virt_to_phys(start_va) :
> + __pa(start_va);
> + end_pa = start_pa + (is_vmalloc ? PAGE_SIZE : len);
>
> - if (try_accept_one(&start, len, PG_LEVEL_2M))
> - continue;
> + if (!tdx_map_gpa(start_pa, end_pa, enc))
> + return false;
>
> - if (!try_accept_one(&start, len, PG_LEVEL_4K))
> + /* private->shared conversion requires only MapGPA call */
> + if (enc && !try_accept_page(start_pa, end_pa))
> return false;
> +
> + start_va += is_vmalloc ? PAGE_SIZE : len;
> }
>
> return true;