Message-ID: <SA1PR21MB133576523E55BBC7300DE2B1BFFA9@SA1PR21MB1335.namprd21.prod.outlook.com>
Date: Thu, 5 Jan 2023 20:29:25 +0000
From: Dexuan Cui <decui@...rosoft.com>
To: Zhi Wang <zhi.wang.linux@...il.com>
CC: "ak@...ux.intel.com" <ak@...ux.intel.com>,
"arnd@...db.de" <arnd@...db.de>, "bp@...en8.de" <bp@...en8.de>,
"brijesh.singh@....com" <brijesh.singh@....com>,
"dan.j.williams@...el.com" <dan.j.williams@...el.com>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
"hpa@...or.com" <hpa@...or.com>,
"jane.chu@...cle.com" <jane.chu@...cle.com>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
KY Srinivasan <kys@...rosoft.com>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"luto@...nel.org" <luto@...nel.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"peterz@...radead.org" <peterz@...radead.org>,
"rostedt@...dmis.org" <rostedt@...dmis.org>,
"sathyanarayanan.kuppuswamy@...ux.intel.com"
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
"seanjc@...gle.com" <seanjc@...gle.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"tony.luck@...el.com" <tony.luck@...el.com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
"x86@...nel.org" <x86@...nel.org>,
"Michael Kelley (LINUX)" <mikelley@...rosoft.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"zhi.a.wang@...el.com" <zhi.a.wang@...el.com>
Subject: RE: [PATCH v2 2/6] x86/tdx: Support vmalloc() for
tdx_enc_status_changed()
> From: Zhi Wang <zhi.wang.linux@...il.com>
> Sent: Thursday, January 5, 2023 10:10 AM
> [...]
> I see. Then do we still need the hv_map_memory() in the following
> code piece in netvsc.c after {set_memory_encrypted, set_memory_decrypted}()
> support memory from vmalloc()?
For SNP, set_memory_decrypted() is already able to support memory
from vmalloc().
For TDX, currently set_memory_decrypted() is unable to support
memory from vmalloc().
>	/* set_memory_decrypted() is called here. */
>	ret = vmbus_establish_gpadl(device->channel,
>				    net_device->recv_buf, buf_size,
>				    &net_device->recv_buf_gpadl_handle);
>	if (ret != 0) {
>		netdev_err(ndev,
>			   "unable to establish receive buffer's gpadl\n");
>		goto cleanup;
>	}
>
> /* Should we remove this? */
The block of code below is for SNP rather than TDX, so it has nothing to do
with the patch here. BTW, the code is indeed removed in Michael's patchset,
which adds device assignment support for SNP guests on Hyper-V:
https://lwn.net/ml/linux-kernel/1669951831-4180-11-git-send-email-mikelley@microsoft.com/
and I'm happy with the removal of the code.
>	if (hv_isolation_type_snp()) {
>		vaddr = hv_map_memory(net_device->recv_buf, buf_size);
>		if (!vaddr) {
>			ret = -ENOMEM;
>			goto cleanup;
>		}
>
>		net_device->recv_original_buf = net_device->recv_buf;
>		net_device->recv_buf = vaddr;
>	}
>
> I assume that we need an VA mapped to a shared GPA here.
Yes.
> The VA(net_device->recv_buf) has been associated with a shared GPA in
> set_memory_decrypted() by adjusting the kernel page table.
For a SNP guest with paravisor on Hyper-V, this is not true in the current
mainline kernel: see set_memory_decrypted() -> __set_memory_enc_dec():
static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
{
	//Dexuan: For a SNP guest with paravisor on Hyper-V, currently we
	// only call hv_set_mem_host_visibility(), i.e. the page table is not
	// updated. This is being changed by Michael's patchset, e.g.,
	// https://lwn.net/ml/linux-kernel/1669951831-4180-7-git-send-email-mikelley@microsoft.com/
	if (hv_is_isolation_supported())
		return hv_set_mem_host_visibility(addr, numpages, !enc);

	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
		return __set_memory_enc_pgtable(addr, numpages, enc);

	return 0;
}
> hv_map_memory() serves a similar purpose, just in a different way:
>
> void *hv_map_memory(void *addr, unsigned long size)
> {
>	unsigned long *pfns = kcalloc(size / PAGE_SIZE,
>				      sizeof(unsigned long), GFP_KERNEL);
>	void *vaddr;
>	int i;
>
>	if (!pfns)
>		return NULL;
>
>	for (i = 0; i < size / PAGE_SIZE; i++)
>		pfns[i] = vmalloc_to_pfn(addr + i * PAGE_SIZE) +
>			(ms_hyperv.shared_gpa_boundary >> PAGE_SHIFT);
>
>	vaddr = vmap_pfn(pfns, size / PAGE_SIZE, PAGE_KERNEL_IO);
>	kfree(pfns);
>
>	return vaddr;
> }