Message-ID: <9a5d3809-f1e1-0f4a-8249-9ce1c6df6453@gmail.com>
Date: Mon, 1 Mar 2021 21:43:45 +0800
From: Tianyu Lan <ltykernel@...il.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: kys@...rosoft.com, haiyangz@...rosoft.com, sthemmin@...rosoft.com,
wei.liu@...nel.org, jejb@...ux.ibm.com, martin.petersen@...cle.com,
Tianyu Lan <Tianyu.Lan@...rosoft.com>,
linux-hyperv@...r.kernel.org, linux-scsi@...r.kernel.org,
linux-kernel@...r.kernel.org, vkuznets@...hat.com,
thomas.lendacky@....com, brijesh.singh@....com,
sunilmut@...rosoft.com
Subject: Re: [RFC PATCH 12/12] HV/Storvsc: Add bounce buffer support for
Storvsc
Hi Christoph:
Thanks a lot for your review. There are a few reasons.
1) Vmbus drivers don't use the DMA API today.
2) The Hyper-V Vmbus channel ring buffer already plays the bounce
buffer role for most vmbus drivers. Only two kinds of packets from
netvsc/storvsc are not covered by it.
3) In an AMD SEV-SNP based Hyper-V guest, the physical address used to
access shared memory should be the bounce buffer's physical address
plus a shared memory boundary (e.g. 48 bits) reported via Hyper-V
CPUID. This is called the virtual top of memory (vTOM) in the AMD spec
and works as a watermark. So we need to ioremap/memremap the associated
physical addresses above the shared memory boundary before accessing
them. swiotlb_bounce() uses the low-end physical address to access the
bounce buffer, and that doesn't work in this scenario. If anything here
is wrong, please correct me.
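To illustrate the address adjustment in point 3: a minimal sketch,
assuming a fixed 48-bit boundary and hypothetical names (the real
boundary bit is reported by a Hyper-V CPUID leaf at runtime, and the
actual kernel code differs):

```c
#include <stdint.h>

/* Hypothetical constant: position of the vTOM boundary bit. In a real
 * guest this value comes from a Hyper-V CPUID leaf, not a #define. */
#define HV_VTOM_BOUNDARY_BIT 48

/* Return the shared (unencrypted) alias of a bounce buffer physical
 * address. Addresses above vTOM are treated as shared with the host,
 * so the guest sets the boundary bit before mapping/accessing them. */
static inline uint64_t hv_shared_pa(uint64_t bounce_pa)
{
	return bounce_pa + (1ULL << HV_VTOM_BOUNDARY_BIT);
}
```

The point is that swiotlb_bounce() touches bounce_pa directly (the low
alias), while an SEV-SNP guest with vTOM must ioremap/memremap and
access hv_shared_pa(bounce_pa) instead.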
Thanks.
On 3/1/2021 2:54 PM, Christoph Hellwig wrote:
> This should be handled by the DMA mapping layer, just like for native
> SEV support.
>