Message-ID: <20210820042151.GB26450@lst.de>
Date: Fri, 20 Aug 2021 06:21:51 +0200
From: "hch@....de" <hch@....de>
To: Michael Kelley <mikelley@...rosoft.com>
Cc: Tianyu Lan <ltykernel@...il.com>,
KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
Dexuan Cui <decui@...rosoft.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>, "x86@...nel.org" <x86@...nel.org>,
"hpa@...or.com" <hpa@...or.com>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"luto@...nel.org" <luto@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
"boris.ostrovsky@...cle.com" <boris.ostrovsky@...cle.com>,
"jgross@...e.com" <jgross@...e.com>,
"sstabellini@...nel.org" <sstabellini@...nel.org>,
"joro@...tes.org" <joro@...tes.org>,
"will@...nel.org" <will@...nel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"kuba@...nel.org" <kuba@...nel.org>,
"jejb@...ux.ibm.com" <jejb@...ux.ibm.com>,
"martin.petersen@...cle.com" <martin.petersen@...cle.com>,
"arnd@...db.de" <arnd@...db.de>, "hch@....de" <hch@....de>,
"m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
"robin.murphy@....com" <robin.murphy@....com>,
"thomas.lendacky@....com" <thomas.lendacky@....com>,
"brijesh.singh@....com" <brijesh.singh@....com>,
"ardb@...nel.org" <ardb@...nel.org>,
Tianyu Lan <Tianyu.Lan@...rosoft.com>,
"pgonda@...gle.com" <pgonda@...gle.com>,
"martin.b.radev@...il.com" <martin.b.radev@...il.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
"rppt@...nel.org" <rppt@...nel.org>,
"sfr@...b.auug.org.au" <sfr@...b.auug.org.au>,
"saravanand@...com" <saravanand@...com>,
"krish.sadhukhan@...cle.com" <krish.sadhukhan@...cle.com>,
"aneesh.kumar@...ux.ibm.com" <aneesh.kumar@...ux.ibm.com>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"rientjes@...gle.com" <rientjes@...gle.com>,
"hannes@...xchg.org" <hannes@...xchg.org>,
"tj@...nel.org" <tj@...nel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
vkuznets <vkuznets@...hat.com>,
"parri.andrea@...il.com" <parri.andrea@...il.com>,
"dave.hansen@...el.com" <dave.hansen@...el.com>
Subject: Re: [PATCH V3 12/13] HV/Netvsc: Add Isolation VM support for
netvsc driver

On Thu, Aug 19, 2021 at 06:14:51PM +0000, Michael Kelley wrote:
> > + if (!pfns)
> > + return NULL;
> > +
> > + for (i = 0; i < size / HV_HYP_PAGE_SIZE; i++)
> > + pfns[i] = virt_to_hvpfn(buf + i * HV_HYP_PAGE_SIZE)
> > + + (ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
> > +
> > + vaddr = vmap_pfn(pfns, size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
> > + kfree(pfns);
> > +
> > + return vaddr;
> > +}
>
> This function appears to be a duplicate of hv_map_memory() in Patch 11 of this
> series. Is it possible to structure things so there is only one implementation? In

So right now it is identical, but there is an important difference:
the swiotlb memory is physically contiguous to start with, so we can
do the simple remap using vmap_range as suggested in the last mail.
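
For the contiguous case that could look roughly like the sketch below.
This is only a sketch under a few assumptions: it uses vmap_pfn() (the
primitive already used in the quoted patch) as the mapping mechanism,
and hv_map_contiguous is a made-up name:

static void *hv_map_contiguous(phys_addr_t paddr, size_t size)
{
	unsigned int i, npages = size / HV_HYP_PAGE_SIZE;
	unsigned long base_pfn, *pfns;
	void *vaddr;

	pfns = kcalloc(npages, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return NULL;

	/* the source is physically contiguous, so every pfn is base + i */
	base_pfn = (paddr + ms_hyperv.shared_gpa_boundary) >>
		   HV_HYP_PAGE_SHIFT;
	for (i = 0; i < npages; i++)
		pfns[i] = base_pfn + i;

	vaddr = vmap_pfn(pfns, npages, PAGE_KERNEL_IO);
	kfree(pfns);
	return vaddr;
}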

The cases here are pretty weird in that netvsc_remap_buf is called right
after vzalloc.  That is, we create _two_ mappings in vmalloc space right
after one another, where the original one is just used for establishing
the "GPADL handle" and for freeing the memory.  In other words, the
obvious thing to do here would be to use a vmalloc variant that allows
taking the shared_gpa_boundary into account when setting up the PTEs,
roughly as sketched below.
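
To illustrate the idea (again only a sketch: netvsc_valloc_above_boundary
is a made-up name, it assumes HV_HYP_PAGE_SIZE == PAGE_SIZE, and it
builds on vmap_pfn() rather than being a proper vmalloc API; the
unaliased pages would still be what gets registered for the GPADL):

static void *netvsc_valloc_above_boundary(size_t size)
{
	unsigned int i, npages = size / HV_HYP_PAGE_SIZE;
	unsigned long boundary_pfn =
		ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT;
	unsigned long *pfns;
	void *vaddr = NULL;

	pfns = kcalloc(npages, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return NULL;

	for (i = 0; i < npages; i++) {
		struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);

		if (!page)
			goto out;
		/* point the PTEs at the alias above the boundary */
		pfns[i] = page_to_pfn(page) + boundary_pfn;
	}

	vaddr = vmap_pfn(pfns, npages, PAGE_KERNEL_IO);
out:
	if (!vaddr)	/* allocation or mapping failed, undo */
		while (i--)
			__free_page(pfn_to_page(pfns[i] - boundary_pfn));
	kfree(pfns);
	return vaddr;
}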

And here is something I need help with from the x86 experts: does the
CPU actually care about this shared_gpa_boundary?  Or does it just
matter for the generated DMA address?  Does someone have a good pointer
to how this mechanism works?