Message-ID: <8f1a285d-4b67-8041-d326-af565b2756c0@gmail.com>
Date: Wed, 21 Jul 2021 18:28:48 +0800
From: Tianyu Lan <ltykernel@...il.com>
To: Christoph Hellwig <hch@....de>
Cc: kys@...rosoft.com, haiyangz@...rosoft.com, sthemmin@...rosoft.com,
wei.liu@...nel.org, decui@...rosoft.com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
konrad.wilk@...cle.com, boris.ostrovsky@...cle.com,
jgross@...e.com, sstabellini@...nel.org, joro@...tes.org,
will@...nel.org, davem@...emloft.net, kuba@...nel.org,
jejb@...ux.ibm.com, martin.petersen@...cle.com, arnd@...db.de,
m.szyprowski@...sung.com, robin.murphy@....com,
kirill.shutemov@...ux.intel.com, akpm@...ux-foundation.org,
rppt@...nel.org, Tianyu.Lan@...rosoft.com, thomas.lendacky@....com,
ardb@...nel.org, robh@...nel.org, nramas@...ux.microsoft.com,
pgonda@...gle.com, martin.b.radev@...il.com, david@...hat.com,
krish.sadhukhan@...cle.com, saravanand@...com,
xen-devel@...ts.xenproject.org, keescook@...omium.org,
rientjes@...gle.com, hannes@...xchg.org,
michael.h.kelley@...rosoft.com, iommu@...ts.linux-foundation.org,
linux-arch@...r.kernel.org, linux-hyperv@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org,
netdev@...r.kernel.org, vkuznets@...hat.com, brijesh.singh@....com,
anparri@...rosoft.com
Subject: Re: [Resend RFC PATCH V4 09/13] x86/Swiotlb/HV: Add Swiotlb bounce
buffer remap function for HV IVM
Thanks for the review.
On 7/20/2021 9:54 PM, Christoph Hellwig wrote:
>
> Please split the swiotlb changes into a separate patch from the
> consumer.
OK. Will update.
>
>> }
>> +
>> +/*
>> + * hv_map_memory - map memory to extra space in the AMD SEV-SNP Isolation VM.
>> + */
>> +unsigned long hv_map_memory(unsigned long addr, unsigned long size)
>> +{
>> +	unsigned long *pfns = kcalloc(size / HV_HYP_PAGE_SIZE,
>> +				      sizeof(unsigned long),
>> +				      GFP_KERNEL);
>> +	unsigned long vaddr;
>> +	int i;
>> +
>> +	if (!pfns)
>> +		return (unsigned long)NULL;
>> +
>> +	for (i = 0; i < size / HV_HYP_PAGE_SIZE; i++)
>> +		pfns[i] = virt_to_hvpfn((void *)addr + i * HV_HYP_PAGE_SIZE) +
>> +			(ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
>> +
>> +	vaddr = (unsigned long)vmap_pfn(pfns, size / HV_HYP_PAGE_SIZE,
>> +					PAGE_KERNEL_IO);
>> +	kfree(pfns);
>> +
>> +	return vaddr;
>
> This seems to miss a 'select VMAP_PFN'.
VMAP_PFN has been selected in the previous patch "[RFC PATCH V4 08/13]
HV/Vmbus: Initialize VMbus ring buffer for Isolation VM".
> But more importantly I don't
> think this actually works. Various DMA APIs do expect a struct page
> backing, so how is this going to work with say dma_mmap_attrs or
> dma_get_sgtable_attrs?
dma_mmap_attrs() and dma_get_sgtable_attrs() take an input virtual
address that belongs to the backing memory with a struct page, and they
return the bounce buffer's DMA physical address, which is below
shared_gpa_boundary (vTOM) and is passed to Hyper-V via the VMbus
protocol.
The new mapped virtual address is only used to access the bounce buffer
inside the swiotlb code and is not used anywhere else. It's stored in
mem->vstart. So the new API set_memory_decrypted_map() in this series is
only called from the swiotlb code. Other platforms may replace
set_memory_decrypted() with set_memory_decrypted_map() as needed.
>
>> +static unsigned long __map_memory(unsigned long addr, unsigned long size)
>> +{
>> +	if (hv_is_isolation_supported())
>> +		return hv_map_memory(addr, size);
>> +
>> +	return addr;
>> +}
>> +
>> +static void __unmap_memory(unsigned long addr)
>> +{
>> +	if (hv_is_isolation_supported())
>> +		hv_unmap_memory(addr);
>> +}
>> +
>> +unsigned long set_memory_decrypted_map(unsigned long addr, unsigned long size)
>> +{
>> +	if (__set_memory_enc_dec(addr, size / PAGE_SIZE, false))
>> +		return (unsigned long)NULL;
>> +
>> +	return __map_memory(addr, size);
>> +}
>> +
>> +int set_memory_encrypted_unmap(unsigned long addr, unsigned long size)
>> +{
>> +	__unmap_memory(addr);
>> +	return __set_memory_enc_dec(addr, size / PAGE_SIZE, true);
>> +}
>
> Why this obsfucation into all kinds of strange helpers? Also I think
> we want an ops vectors (or alternative calls) instead of the random
> if checks here.
Yes, agreed. Will add an ops vector so that different platforms can
provide their own map/unmap memory callbacks.
>
>> + * @vstart: The virtual start address of the swiotlb memory pool. The swiotlb
>> + * memory pool may be remapped in the memory encrypted case and store
>
> Normally we'd call this vaddr or cpu_addr.
OK. Will update.
>
>> -	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
>> -	memset(vaddr, 0, bytes);
>> +	mem->vstart = (void *)set_memory_decrypted_map((unsigned long)vaddr, bytes);
>
> Please always pass kernel virtual addresses as pointers.
>
> And I think these APIs might need better names, e.g.
>
> arch_dma_map_decrypted and arch_dma_unmap_decrypted.
>
> Also these will need fallback versions for non-x86 architectures that
> currently use memory encryption.
Sure. Will update in the next version.