Message-ID: <9e4d5de1-37b3-550d-9bca-4eb158e45b33@gmail.com>
Date: Fri, 20 Aug 2021 21:11:15 +0800
From: Tianyu Lan <ltykernel@...il.com>
To: "hch@....de" <hch@....de>, Michael Kelley <mikelley@...rosoft.com>
Cc: KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
Dexuan Cui <decui@...rosoft.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>, "x86@...nel.org" <x86@...nel.org>,
"hpa@...or.com" <hpa@...or.com>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"luto@...nel.org" <luto@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
"boris.ostrovsky@...cle.com" <boris.ostrovsky@...cle.com>,
"jgross@...e.com" <jgross@...e.com>,
"sstabellini@...nel.org" <sstabellini@...nel.org>,
"joro@...tes.org" <joro@...tes.org>,
"will@...nel.org" <will@...nel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"kuba@...nel.org" <kuba@...nel.org>,
"jejb@...ux.ibm.com" <jejb@...ux.ibm.com>,
"martin.petersen@...cle.com" <martin.petersen@...cle.com>,
"arnd@...db.de" <arnd@...db.de>,
"m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
"robin.murphy@....com" <robin.murphy@....com>,
"thomas.lendacky@....com" <thomas.lendacky@....com>,
"brijesh.singh@....com" <brijesh.singh@....com>,
"ardb@...nel.org" <ardb@...nel.org>,
Tianyu Lan <Tianyu.Lan@...rosoft.com>,
"pgonda@...gle.com" <pgonda@...gle.com>,
"martin.b.radev@...il.com" <martin.b.radev@...il.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
"rppt@...nel.org" <rppt@...nel.org>,
"sfr@...b.auug.org.au" <sfr@...b.auug.org.au>,
"saravanand@...com" <saravanand@...com>,
"krish.sadhukhan@...cle.com" <krish.sadhukhan@...cle.com>,
"aneesh.kumar@...ux.ibm.com" <aneesh.kumar@...ux.ibm.com>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"rientjes@...gle.com" <rientjes@...gle.com>,
"hannes@...xchg.org" <hannes@...xchg.org>,
"tj@...nel.org" <tj@...nel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
vkuznets <vkuznets@...hat.com>,
"parri.andrea@...il.com" <parri.andrea@...il.com>,
"dave.hansen@...el.com" <dave.hansen@...el.com>
Subject: Re: [PATCH V3 12/13] HV/Netvsc: Add Isolation VM support for netvsc
driver
On 8/20/2021 12:21 PM, hch@....de wrote:
> On Thu, Aug 19, 2021 at 06:14:51PM +0000, Michael Kelley wrote:
>>> + if (!pfns)
>>> + return NULL;
>>> +
>>> + for (i = 0; i < size / HV_HYP_PAGE_SIZE; i++)
>>> + pfns[i] = virt_to_hvpfn(buf + i * HV_HYP_PAGE_SIZE)
>>> + + (ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
>>> +
>>> + vaddr = vmap_pfn(pfns, size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
>>> + kfree(pfns);
>>> +
>>> + return vaddr;
>>> +}
>>
>> This function appears to be a duplicate of hv_map_memory() in Patch 11 of this
>> series. Is it possible to structure things so there is only one implementation? In
>
> So right now it is identical, but there is an important difference:
> the swiotlb memory is physically contiguous to start with, so we can
> do the simple remap using vmap_range as suggested in the last mail.
> The cases here are pretty weird in that netvsc_remap_buf is called right
> after vzalloc. That is we create _two_ mappings in vmalloc space right
> after another, where the original one is just used for establishing the
> "GPADL handle" and freeing the memory. In other words, the obvious thing
> to do here would be to use a vmalloc variant that allows to take the
> shared_gpa_boundary into account when setting up the PTEs.
The buffer is allocated via vmalloc(). It needs to be marked as host
visible via a Hyper-V hvcall before it can be accessed through the
address space above shared_gpa_boundary. That hvcall is issued in
vmbus_establish_gpadl().
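To make the ordering concrete, here is a rough sketch of that flow for
the send buffer (a paraphrase rather than the actual patch: the struct
netvsc_device field names and the exact vmbus_establish_gpadl()
signature are written from memory, and error unwinding is omitted).
netvsc_remap_buf() is the helper quoted at the top of this mail:

static void *netvsc_setup_send_buf(struct hv_device *device,
				   struct netvsc_device *net_device,
				   u32 buf_size)
{
	int ret;

	/* The buffer starts out as an ordinary (encrypted) vmalloc mapping */
	net_device->send_buf = vzalloc(buf_size);
	if (!net_device->send_buf)
		return NULL;

	/*
	 * vmbus_establish_gpadl() issues the hvcall that marks the
	 * pages host-visible; the returned GPADL handle is what the
	 * host uses to refer to this buffer.
	 */
	ret = vmbus_establish_gpadl(device->channel, net_device->send_buf,
				    buf_size,
				    &net_device->send_buf_gpadl_handle);
	if (ret)
		return NULL;

	/*
	 * Only after the hvcall may the buffer be accessed through the
	 * alias above shared_gpa_boundary; netvsc_remap_buf() builds
	 * that second mapping with vmap_pfn().
	 */
	return netvsc_remap_buf(net_device->send_buf, buf_size);
}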
>
> And here is something I need help with from the x86 experts: does the
> CPU actually care about this shared_gpa_boundary? Or does it just
> matter for the generated DMA address? Does someone have a good pointer
> to how this mechanism works?
>
The shared_gpa_boundary is the vTOM feature of AMD SEV-SNP. Tom Lendacky
described the feature in a previous mail; I copy it here, please have
a look.
From Tom Lendacky:
IIUC, this is using the vTOM feature of SEV-SNP. When this feature is
enabled for a VMPL level, any physical memory addresses below vTOM are
considered private/encrypted and any physical memory addresses above vTOM
are considered shared/unencrypted. With this option, you don't need a
fully enlightened guest that sets and clears page table encryption bits.
You just need the DMA buffers to be allocated in the proper range above
vTOM.
See the section on "Virtual Machine Privilege Levels" in
https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf.
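Just to illustrate the addressing model in code (my own sketch, not
taken from the patch; it assumes the shared_gpa_boundary argument holds
the vTOM value that Hyper-V reports, as stored in
ms_hyperv.shared_gpa_boundary):

/*
 * Addresses below vTOM (shared_gpa_boundary) are the private,
 * encrypted view of guest memory; the same page is visible to the
 * host, unencrypted, at gpa + shared_gpa_boundary.
 */
static inline u64 gpa_to_shared_gpa(u64 gpa, u64 shared_gpa_boundary)
{
	return gpa + shared_gpa_boundary;
}

static inline bool gpa_is_shared(u64 gpa, u64 shared_gpa_boundary)
{
	/* Anything at or above vTOM is treated as shared/unencrypted */
	return gpa >= shared_gpa_boundary;
}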