Message-ID: <7d86307f-83ff-03ad-c6e9-87b455c559b8@gmail.com>
Date: Tue, 15 Jun 2021 22:31:31 +0800
From: Tianyu Lan <ltykernel@...il.com>
To: Christoph Hellwig <hch@....de>
Cc: kys@...rosoft.com, haiyangz@...rosoft.com, sthemmin@...rosoft.com,
wei.liu@...nel.org, decui@...rosoft.com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, x86@...nel.org, hpa@...or.com,
arnd@...db.de, dave.hansen@...ux.intel.com, luto@...nel.org,
peterz@...radead.org, akpm@...ux-foundation.org,
kirill.shutemov@...ux.intel.com, rppt@...nel.org,
hannes@...xchg.org, cai@....pw, krish.sadhukhan@...cle.com,
saravanand@...com, Tianyu.Lan@...rosoft.com,
konrad.wilk@...cle.com, m.szyprowski@...sung.com,
robin.murphy@....com, boris.ostrovsky@...cle.com, jgross@...e.com,
sstabellini@...nel.org, joro@...tes.org, will@...nel.org,
xen-devel@...ts.xenproject.org, davem@...emloft.net,
kuba@...nel.org, jejb@...ux.ibm.com, martin.petersen@...cle.com,
iommu@...ts.linux-foundation.org, linux-arch@...r.kernel.org,
linux-hyperv@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-scsi@...r.kernel.org, netdev@...r.kernel.org,
vkuznets@...hat.com, thomas.lendacky@....com,
brijesh.singh@....com, sunilmut@...rosoft.com
Subject: Re: [RFC PATCH V3 10/11] HV/Netvsc: Add Isolation VM support for
netvsc driver
On 6/14/2021 11:33 PM, Christoph Hellwig wrote:
> On Mon, Jun 14, 2021 at 10:04:06PM +0800, Tianyu Lan wrote:
>> The pages in the hv_page_buffer array here are in the kernel linear
>> mapping. The packet sent to the host contains an array describing the
>> transaction data. In an isolation VM, the data in these pages needs to
>> be copied to the bounce buffer, so dma_map_single() is called here to
>> map the data pages through the bounce buffer. The vmbus has a ring
>> buffer to/from which send/receive packets are copied. The ring buffer
>> was already remapped into the extra address space above the shared gpa
>> boundary/vTOM when the netvsc driver was probed, so no dma map function
>> is called for the vmbus ring buffer.
>
> So why do we have all that PFN magic instead of using struct page or
> the usual kernel I/O buffers that contain a page pointer?
>
These PFNs are originally part of the Hyper-V protocol data and are sent
to the host. The host accepts these GFNs and copies data from/to guest
memory. The VA-to-PA translation is done by the caller that populates
the hv_page_buffer array. I will try calling the dma map function before
populating struct hv_page_buffer, which avoids a redundant translation
between PA and VA.
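
Roughly, the idea would look like the sketch below (the helper name and
the exact field rewrites are my assumptions for illustration, not the
final patch): map each data page with dma_map_single(), which bounces it
through swiotlb in an isolation VM, then store the bounce-buffer PFN and
offset back into the hv_page_buffer entry so the host sees shared
memory:

```c
/*
 * Hypothetical sketch, not the actual patch: map the pages described by
 * a hv_page_buffer array for DMA before handing the array to the host.
 * In an isolation VM, dma_map_single() copies the data into the swiotlb
 * bounce buffer (which lies in memory shared with the host), so the
 * PFN/offset stored back here refer to the bounce pages.
 */
static int netvsc_dma_map_pb(struct device *dev,
			     struct hv_page_buffer *pb, u32 count)
{
	u32 i;

	for (i = 0; i < count; i++) {
		/* pb[] entries are in the kernel linear mapping. */
		void *va = phys_to_virt(pb[i].pfn << PAGE_SHIFT)
			   + pb[i].offset;
		dma_addr_t dma = dma_map_single(dev, va, pb[i].len,
						DMA_TO_DEVICE);

		if (dma_mapping_error(dev, dma))
			return -ENOMEM;

		/* Point the host at the bounce-buffer page instead. */
		pb[i].pfn = dma >> PAGE_SHIFT;
		pb[i].offset = offset_in_page(dma);
	}
	return 0;
}
```

A matching dma_unmap_single() pass would of course be needed on send
completion; error handling here is also simplified (already-mapped
entries are not unwound).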