Message-ID: <DM6PR21MB12926C3BC4766C78C57D9210CA629@DM6PR21MB1292.namprd21.prod.outlook.com>
Date: Thu, 25 Nov 2021 21:58:16 +0000
From: Haiyang Zhang <haiyangz@...rosoft.com>
To: "Michael Kelley (LINUX)" <mikelley@...rosoft.com>,
Tianyu Lan <ltykernel@...il.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"x86@...nel.org" <x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>,
"luto@...nel.org" <luto@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"jgross@...e.com" <jgross@...e.com>,
"sstabellini@...nel.org" <sstabellini@...nel.org>,
"boris.ostrovsky@...cle.com" <boris.ostrovsky@...cle.com>,
KY Srinivasan <kys@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
Dexuan Cui <decui@...rosoft.com>,
"joro@...tes.org" <joro@...tes.org>,
"will@...nel.org" <will@...nel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"kuba@...nel.org" <kuba@...nel.org>,
"jejb@...ux.ibm.com" <jejb@...ux.ibm.com>,
"martin.petersen@...cle.com" <martin.petersen@...cle.com>,
"hch@....de" <hch@....de>,
"m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
"robin.murphy@....com" <robin.murphy@....com>,
Tianyu Lan <Tianyu.Lan@...rosoft.com>,
"thomas.lendacky@....com" <thomas.lendacky@....com>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>
CC: "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
vkuznets <vkuznets@...hat.com>,
"brijesh.singh@....com" <brijesh.singh@....com>,
"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
"parri.andrea@...il.com" <parri.andrea@...il.com>,
"dave.hansen@...el.com" <dave.hansen@...el.com>
Subject: RE: [PATCH V2 5/6] net: netvsc: Add Isolation VM support for netvsc
driver
> -----Original Message-----
> From: Michael Kelley (LINUX) <mikelley@...rosoft.com>
> Sent: Wednesday, November 24, 2021 12:03 PM
> To: Tianyu Lan <ltykernel@...il.com>; tglx@...utronix.de; mingo@...hat.com; bp@...en8.de;
> dave.hansen@...ux.intel.com; x86@...nel.org; hpa@...or.com; luto@...nel.org;
> peterz@...radead.org; jgross@...e.com; sstabellini@...nel.org; boris.ostrovsky@...cle.com;
> KY Srinivasan <kys@...rosoft.com>; Haiyang Zhang <haiyangz@...rosoft.com>; Stephen
> Hemminger <sthemmin@...rosoft.com>; wei.liu@...nel.org; Dexuan Cui <decui@...rosoft.com>;
> joro@...tes.org; will@...nel.org; davem@...emloft.net; kuba@...nel.org; jejb@...ux.ibm.com;
> martin.petersen@...cle.com; hch@....de; m.szyprowski@...sung.com; robin.murphy@....com;
> Tianyu Lan <Tianyu.Lan@...rosoft.com>; thomas.lendacky@....com; xen-
> devel@...ts.xenproject.org
> Cc: iommu@...ts.linux-foundation.org; linux-hyperv@...r.kernel.org; linux-
> kernel@...r.kernel.org; linux-scsi@...r.kernel.org; netdev@...r.kernel.org; vkuznets
> <vkuznets@...hat.com>; brijesh.singh@....com; konrad.wilk@...cle.com;
> parri.andrea@...il.com; dave.hansen@...el.com
> Subject: RE: [PATCH V2 5/6] net: netvsc: Add Isolation VM support for netvsc driver
>
> From: Tianyu Lan <ltykernel@...il.com> Sent: Tuesday, November 23, 2021 6:31 AM
> >
> > In an Isolation VM, all memory shared with the host needs to be marked
> > visible to the host via a hypercall. vmbus_establish_gpadl() has already
> > done this for the netvsc rx/tx ring buffers. The page buffers used by
> > vmbus_sendpacket_pagebuffer() still need to be handled. Use the DMA API
> > to map/unmap this memory when sending/receiving packets, and the Hyper-V
> > swiotlb bounce buffer DMA address will be returned. The swiotlb bounce
> > buffer has been marked visible to the host during boot.
> >
> > Allocate the rx/tx ring buffers via dma_alloc_noncontiguous() in an
> > Isolation VM. After calling vmbus_establish_gpadl(), which marks these
> > pages visible to the host, map the pages into unencrypted address space
> > via dma_vmap_noncontiguous().
> >
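For reference, the flow described in the quoted text above looks roughly like
the sketch below. This is illustrative only: "hdev" and "buf_size" are
placeholder names, error unwinding is omitted, and it assumes the extended
noncontiguous DMA hooks proposed in this series.

	struct sg_table *sgt;
	void *recv_buf;

	/* Allocate the (possibly discontiguous) pages backing the buffer. */
	sgt = dma_alloc_noncontiguous(&hdev->device, buf_size,
				      DMA_FROM_DEVICE, GFP_KERNEL, 0);
	if (!sgt)
		return -ENOMEM;

	/* vmbus_establish_gpadl() marks the pages visible to the host here. */

	/* Map the pages contiguously in kernel virtual address space. */
	recv_buf = dma_vmap_noncontiguous(&hdev->device, buf_size, sgt);
	if (!recv_buf)
		return -ENOMEM;
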
>
> The big unresolved topic is how best to do the allocation and mapping of the big netvsc
> send and receive buffers. Let me summarize and make a recommendation.
>
> Background
> ==========
> 1. Each Hyper-V synthetic network device requires a large pre-allocated receive
> buffer (defaults to 16 Mbytes) and a similar send buffer (defaults to 1 Mbyte).
> 2. The buffers are allocated in guest memory and shared with the Hyper-V host.
> As such, in the Hyper-V SNP environment, the memory must be unencrypted
> and accessed in the Hyper-V guest with shared_gpa_boundary (i.e., VTOM)
> added to the physical memory address.
> 3. The buffers need *not* be contiguous in guest physical memory, but must be
> contiguously mapped in guest kernel virtual space.
> 4. Network devices may come and go during the life of the VM, so allocation of
> these buffers and their mappings may be done after Linux has been running for
> a long time.
> 5. Performance of the allocation and mapping process is not an issue since it is
> done only on synthetic network device add/remove.
> 6. So the primary goals are an appropriate logical abstraction, code that is
> simple and straightforward, and efficient memory usage.
>
> Approaches
> ==========
> During the development of these patches, four approaches have been
> implemented:
>
> 1. Two virtual mappings: One from vmalloc() to allocate the guest memory, and
> the second from vmap_pfn() after adding the shared_gpa_boundary. This is
> implemented in Hyper-V or netvsc specific code, with no use of DMA APIs.
> No separate list of physical pages is maintained, so for creating the second
> mapping, the PFN list is assembled temporarily by doing virt_to_phys()
> page-by-page on the vmalloc mapping, and then discarded because it is no
> longer needed. [v4 of the original patch series.]
>
> 2. Two virtual mappings as in (1) above, but implemented via new DMA calls
> dma_map_decrypted() and dma_unmap_encrypted(). [v3 of the original
> patch series.]
>
> 3. Two virtual mappings as in (1) above, but implemented via DMA noncontiguous
> allocation and mapping calls, as enhanced to allow for custom map/unmap
> implementations. A list of physical pages is maintained in the dma_sgt_handle
> as expected by the DMA noncontiguous API. [New split-off patch series v1 & v2]
>
> 4. Single virtual mapping from vmap_pfn(). The netvsc driver allocates physical
> memory via alloc_pages() with as much contiguity as possible, and maintains a
> list of physical pages and ranges. The single virtual mapping is set up with
> vmap_pfn() after adding shared_gpa_boundary. [v5 of the original patch series.]
>
> Both implementations using DMA APIs use very little of the existing DMA machinery. Both
> require extensions to the DMA APIs, and custom ops functions.
> While in some sense the netvsc send and receive buffers involve DMA, they do not require
> any DMA actions on a per-I/O basis. It seems better to me to not try to fit these two
> buffers into the DMA model as a one-off. Let's just use Hyper-V specific code to allocate
> and map them, as is done with the Hyper-V VMbus channel ring buffers.
>
> That leaves approaches (1) and (4) above. Between those two, (1) is simpler even though
> there are two virtual mappings. Using alloc_pages() as in (4) is messy and there's no
> real benefit to using higher order allocations.
> (4) also requires maintaining a separate list of PFNs and ranges, which offsets some of
> the benefit of having only one virtual mapping active at any point in time.
>
> I don't think there's a clear "right" answer, so it's a judgment call. We've explored
> what other approaches would look like, and I'd say let's go with
> (1) as the simpler approach. Thoughts?
>
I agree with the following goal:
"So the primary goals are an appropriate logical abstraction, code that is
simple and straightforward, and efficient memory usage."
And Approach #1 looks better to me as well.
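
For what it's worth, my mental model of approach (1) is roughly the sketch
below. This is only illustrative; netvsc_remap_buf() is a made-up helper name,
not code from any of the patch versions, and it assumes the
shared_gpa_boundary field added earlier in the series.

	/*
	 * Build the second virtual mapping: the buffer was allocated with
	 * vmalloc() and made visible to the host via vmbus_establish_gpadl();
	 * remap the same pages above shared_gpa_boundary with vmap_pfn().
	 */
	static void *netvsc_remap_buf(void *buf, unsigned long size)
	{
		unsigned long *pfns;
		void *vaddr;
		int i;

		pfns = kcalloc(size / PAGE_SIZE, sizeof(unsigned long),
			       GFP_KERNEL);
		if (!pfns)
			return NULL;

		for (i = 0; i < size / PAGE_SIZE; i++)
			pfns[i] = virt_to_hvpfn(buf + i * PAGE_SIZE) +
				  (ms_hyperv.shared_gpa_boundary >> PAGE_SHIFT);

		vaddr = vmap_pfn(pfns, size / PAGE_SIZE, PAGE_KERNEL);
		kfree(pfns);

		return vaddr;
	}
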
Thanks,
- Haiyang