Date:   Thu, 13 May 2021 00:01:01 +0800
From:   Tianyu Lan <ltykernel@...il.com>
To:     Christoph Hellwig <hch@....de>, konrad.wilk@...cle.com
Cc:     kys@...rosoft.com, haiyangz@...rosoft.com, sthemmin@...rosoft.com,
        wei.liu@...nel.org, tglx@...utronix.de, mingo@...hat.com,
        bp@...en8.de, x86@...nel.org, hpa@...or.com, arnd@...db.de,
        akpm@...ux-foundation.org, gregkh@...uxfoundation.org,
        konrad.wilk@...cle.com, m.szyprowski@...sung.com,
        robin.murphy@....com, joro@...tes.org, will@...nel.org,
        davem@...emloft.net, kuba@...nel.org, jejb@...ux.ibm.com,
        martin.petersen@...cle.com, Tianyu Lan <Tianyu.Lan@...rosoft.com>,
        iommu@...ts.linux-foundation.org, linux-arch@...r.kernel.org,
        linux-hyperv@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linux-scsi@...r.kernel.org,
        netdev@...r.kernel.org, vkuznets@...hat.com,
        thomas.lendacky@....com, brijesh.singh@....com,
        sunilmut@...rosoft.com
Subject: Re: [Resend RFC PATCH V2 10/12] HV/IOMMU: Add Hyper-V dma ops support

Hi Christoph and Konrad:
	The current swiotlb implementation uses a single bounce buffer
pool for all devices, which introduces high overhead when getting or
freeing bounce buffers during performance testing. The swiotlb code
protects the bounce buffer data with one global spin lock, so several
device queues contend for that lock and this introduces additional
overhead.

From both a performance and a security perspective, each device should
have a separate swiotlb bounce buffer pool, so this part needs to be
reworked. I want to check whether this is the right way to resolve the
performance issues with the swiotlb bounce buffer. Any other
suggestions are welcome.
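
The per-device pool idea above could be sketched roughly as below. All
names here (dev_io_tlb, dev_tlb_alloc, dev_tlb_free) and the toy bitmap
allocator are hypothetical illustrations, not code from this series; in
the kernel, each pool would also carry its own spinlock so that device
queues stop contending on the single global io_tlb lock:

```c
#include <assert.h>

/* Hypothetical sketch: one bounce-slot pool per device instead of the
 * single global swiotlb pool. A real pool would embed a per-pool
 * spinlock taken here, replacing the global swiotlb lock. */

#define DEV_TLB_SLOTS 32

struct dev_io_tlb {
	unsigned long bitmap;	/* one bit per bounce slot */
};

/* Claim a free slot; returns the slot index, or -1 if this pool is full. */
static int dev_tlb_alloc(struct dev_io_tlb *tlb)
{
	int i;

	for (i = 0; i < DEV_TLB_SLOTS; i++) {
		if (!(tlb->bitmap & (1UL << i))) {
			tlb->bitmap |= 1UL << i;
			return i;
		}
	}
	return -1;
}

/* Return a slot to this device's own pool. */
static void dev_tlb_free(struct dev_io_tlb *tlb, int slot)
{
	tlb->bitmap &= ~(1UL << slot);
}
```

Since each device owns its pool, two devices allocating concurrently
never touch the same bitmap (or, in the kernel, the same lock).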

Thanks.

On 4/14/2021 11:47 PM, Christoph Hellwig wrote:
>> +static dma_addr_t hyperv_map_page(struct device *dev, struct page *page,
>> +				  unsigned long offset, size_t size,
>> +				  enum dma_data_direction dir,
>> +				  unsigned long attrs)
>> +{
>> +	phys_addr_t map, phys = (page_to_pfn(page) << PAGE_SHIFT) + offset;
>> +
>> +	if (!hv_is_isolation_supported())
>> +		return phys;
>> +
>> +	map = swiotlb_tbl_map_single(dev, phys, size, HV_HYP_PAGE_SIZE, dir,
>> +				     attrs);
>> +	if (map == (phys_addr_t)DMA_MAPPING_ERROR)
>> +		return DMA_MAPPING_ERROR;
>> +
>> +	return map;
>> +}
> 
> This largely duplicates what dma-direct + swiotlb does.  Please use
> force_dma_unencrypted to force bounce buffering and just use the generic
> code.
> 
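The generic path referred to above can be sketched roughly as follows.
Everything below is a stand-in for the real kernel helpers
(force_dma_unencrypted() actually takes a struct device *), shown only
to illustrate the bounce-or-not decision that dma-direct already makes:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch: dma-direct bounces through swiotlb exactly when
 * force_dma_unencrypted() reports that the device cannot reach
 * encrypted guest memory, so a custom map_page op is unnecessary. */

static bool vm_is_isolated;	/* stand-in for the platform's isolation state */

/* stand-in for the kernel's force_dma_unencrypted(struct device *) */
static bool force_dma_unencrypted(void)
{
	return vm_is_isolated;
}

enum map_path { MAP_DIRECT, MAP_SWIOTLB_BOUNCE };

/* stand-in for the dma_direct_map_page() decision */
static enum map_path choose_map_path(void)
{
	if (force_dma_unencrypted())
		return MAP_SWIOTLB_BOUNCE;	/* swiotlb_tbl_map_single() */
	return MAP_DIRECT;			/* plain phys-to-dma translation */
}
```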
>> +	if (hv_isolation_type_snp()) {
>> +		ret = hv_set_mem_host_visibility(
>> +				phys_to_virt(hyperv_io_tlb_start),
>> +				hyperv_io_tlb_size,
>> +				VMBUS_PAGE_VISIBLE_READ_WRITE);
>> +		if (ret)
>> +			panic("%s: Fail to mark Hyper-v swiotlb buffer visible to host. err=%d\n",
>> +			      __func__, ret);
>> +
>> +		hyperv_io_tlb_remap = ioremap_cache(hyperv_io_tlb_start
>> +					    + ms_hyperv.shared_gpa_boundary,
>> +						    hyperv_io_tlb_size);
>> +		if (!hyperv_io_tlb_remap)
>> +			panic("%s: Fail to remap io tlb.\n", __func__);
>> +
>> +		memset(hyperv_io_tlb_remap, 0x00, hyperv_io_tlb_size);
>> +		swiotlb_set_bounce_remap(hyperv_io_tlb_remap);
> 
> And this really needs to go into a common hook where we currently just
> call set_memory_decrypted so that all the different schemes for these
> trusted VMs (we have about half a dozen now) can share code rather than
> reinventing it.
> 
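The "common hook" suggestion above might look roughly like the sketch
below: one shared set_memory_decrypted()-style entry point that each
confidential-VM scheme plugs a callback into, rather than every
platform open-coding host-visibility changes. The ops structure and all
names here are hypothetical, for illustration only:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical common hook: each trusted-VM flavor registers its own
 * callback for making pages visible/shared with the host. */

struct mem_encrypt_ops {
	/* make numpages pages starting at addr visible to the host */
	int (*decrypt)(unsigned long addr, int numpages);
};

static const struct mem_encrypt_ops *platform_ops;

static void register_mem_encrypt_ops(const struct mem_encrypt_ops *ops)
{
	platform_ops = ops;
}

/* the single shared path callers use, regardless of platform */
static int set_memory_decrypted_common(unsigned long addr, int numpages)
{
	if (!platform_ops || !platform_ops->decrypt)
		return 0;	/* no isolation scheme active */
	return platform_ops->decrypt(addr, numpages);
}

/* toy Hyper-V-style callback recording what it was asked to do */
static int hv_decrypted_pages;

static int hv_mark_visible(unsigned long addr, int numpages)
{
	(void)addr;
	hv_decrypted_pages += numpages;	/* a real callback would hypercall here */
	return 0;
}
```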
