Message-ID: <d5ebd13d-1278-8714-3f03-8ee7f04a2b38@amd.com>
Date:   Tue, 15 Nov 2016 11:02:20 -0600
From:   Tom Lendacky <thomas.lendacky@....com>
To:     Radim Krčmář <rkrcmar@...hat.com>
CC:     <linux-arch@...r.kernel.org>, <linux-efi@...r.kernel.org>,
        <kvm@...r.kernel.org>, <linux-doc@...r.kernel.org>,
        <x86@...nel.org>, <linux-kernel@...r.kernel.org>,
        <kasan-dev@...glegroups.com>, <linux-mm@...ck.org>,
        <iommu@...ts.linux-foundation.org>, Rik van Riel <riel@...hat.com>,
        Arnd Bergmann <arnd@...db.de>,
        Jonathan Corbet <corbet@....net>,
        Matt Fleming <matt@...eblueprint.co.uk>,
        Joerg Roedel <joro@...tes.org>,
        Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Larry Woodman <lwoodman@...hat.com>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Andy Lutomirski <luto@...nel.org>,
        "H. Peter Anvin" <hpa@...or.com>,
        Andrey Ryabinin <aryabinin@...tuozzo.com>,
        Alexander Potapenko <glider@...gle.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Dmitry Vyukov <dvyukov@...gle.com>
Subject: Re: [RFC PATCH v3 13/20] x86: DMA support for memory encryption

On 11/15/2016 8:39 AM, Radim Krčmář wrote:
> 2016-11-09 18:37-0600, Tom Lendacky:
>> Since DMA addresses will effectively look like 48-bit addresses when the
>> memory encryption mask is set, SWIOTLB is needed if the DMA mask of the
>> device performing the DMA does not support 48-bits. SWIOTLB will be
>> initialized to create un-encrypted bounce buffers for use by these devices.
>>
>> Signed-off-by: Tom Lendacky <thomas.lendacky@....com>
>> ---
>> diff --git a/arch/x86/kernel/pci-nommu.c b/arch/x86/kernel/pci-nommu.c
>> @@ -30,7 +30,7 @@ static dma_addr_t nommu_map_page(struct device *dev, struct page *page,
>>  				 enum dma_data_direction dir,
>>  				 unsigned long attrs)
>>  {
>> -	dma_addr_t bus = page_to_phys(page) + offset;
>> +	dma_addr_t bus = phys_to_dma(dev, page_to_phys(page)) + offset;
>>  	WARN_ON(size == 0);
>>  	if (!check_addr("map_single", dev, bus, size))
>>  		return DMA_ERROR_CODE;
>> diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
>> @@ -12,6 +12,8 @@
>>  int swiotlb __read_mostly;
>>  
>>  void *x86_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>> @@ -64,13 +66,15 @@ static struct dma_map_ops swiotlb_dma_ops = {
>>   * pci_swiotlb_detect_override - set swiotlb to 1 if necessary
>>   *
>>   * This returns non-zero if we are forced to use swiotlb (by the boot
>> - * option).
>> + * option). If memory encryption is enabled then swiotlb will be set
>> + * to 1 so that bounce buffers are allocated and used for devices that
>> + * do not support the addressing range required for the encryption mask.
>>   */
>>  int __init pci_swiotlb_detect_override(void)
>>  {
>>  	int use_swiotlb = swiotlb | swiotlb_force;
>>  
>> -	if (swiotlb_force)
>> +	if (swiotlb_force || sme_me_mask)
>>  		swiotlb = 1;
>>  
>>  	return use_swiotlb;
> 
> We want to return 1 even if only sme_me_mask is 1, because the return
> value is used for detection.  The following would be less obscure, IMO:
> 
> 	if (swiotlb_force || sme_me_mask)
> 		swiotlb = 1;
> 
> 	return swiotlb;

If we do that, then all DMA would go through the swiotlb bounce buffers.
By setting swiotlb to 1 we indicate that the bounce buffers will be
needed for those devices that can't support the addressing range when
the encryption bit is set (48-bit DMA). But if the device can support
the addressing range, we won't use the bounce buffers.
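
To put it another way, the swiotlb mapping path still does a per-device
capability check, so swiotlb = 1 only means the bounce buffers exist; they
are only used when the device can't reach the mask-including DMA address.
Roughly, as a simplified sketch of the existing swiotlb_map_page() decision
(illustration only, the function name sketch_map_page is made up here and
error handling is omitted):

static dma_addr_t sketch_map_page(struct device *dev, struct page *page,
				  unsigned long offset, size_t size,
				  enum dma_data_direction dir)
{
	phys_addr_t phys = page_to_phys(page) + offset;
	dma_addr_t dev_addr = phys_to_dma(dev, phys);	/* includes sme_me_mask */

	/* The device can reach the mask-including address: map it directly */
	if (dma_capable(dev, dev_addr, size) && !swiotlb_force)
		return dev_addr;

	/* Otherwise bounce through an un-encrypted swiotlb buffer */
	phys = map_single(dev, phys, size, dir);
	return swiotlb_phys_to_dma(dev, phys);
}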

> 
>> diff --git a/init/main.c b/init/main.c
>> @@ -598,6 +602,15 @@ asmlinkage __visible void __init start_kernel(void)
>>  	 */
>>  	locking_selftest();
>>  
>> +	/*
>> +	 * This needs to be called before any devices perform DMA
>> +	 * operations that might use the swiotlb bounce buffers.
>> +	 * This call will mark the bounce buffers as un-encrypted so
>> +	 * that their usage will not cause "plain-text" data to be
>> +	 * decrypted when accessed.
>> +	 */
>> +	mem_encrypt_init();
> 
> (Comments below are connected to the reason why we call this.)
> 
>> diff --git a/lib/swiotlb.c b/lib/swiotlb.c
>> @@ -159,6 +171,31 @@ void swiotlb_print_info(void)
>> +/*
>> + * If memory encryption is active, the DMA address for an encrypted page may
>> + * be beyond the range of the device. If bounce buffers are required be sure
>> + * that they are not on an encrypted page. This should be called before the
>> + * iotlb area is used.
>> + */
>> +void __init swiotlb_clear_encryption(void)
>> +{
>> +	void *vaddr;
>> +	unsigned long bytes;
>> +
>> +	if (no_iotlb_memory || !io_tlb_start || late_alloc)
> 
> io_tlb_start seems redundant -- when can !no_iotlb_memory &&
> !io_tlb_start happen?

Yes, the io_tlb_start check can be removed.

> 
> Is the order of calls
>   1) swiotlb init
>   2) SME init
>   3) swiotlb late init 
> ?

Yes, sort of.  The swiotlb late init may not be called.

> 
> We setup encrypted swiotlb and then decrypt it, but sometimes set it up
> decrypted (late_alloc) ... why isn't the swiotlb set up decrypted
> directly?

When swiotlb is allocated in swiotlb_init(), it is too early to make
use of the API to change the page attributes. Because of this, the
callback to make those changes later is needed.
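
So the intended ordering is: swiotlb_init() allocates the buffers early
(still encrypted), and once the page attribute code is usable,
mem_encrypt_init() marks them un-encrypted before any device touches them.
Roughly, as a sketch of the intent only (the real body in the series does
a bit more than this):

void __init mem_encrypt_init(void)
{
	if (!sme_me_mask)
		return;

	/*
	 * The swiotlb buffers were allocated encrypted in swiotlb_init().
	 * It is now late enough to change the page attributes, so mark
	 * them un-encrypted before any device DMA uses them.
	 */
	swiotlb_clear_encryption();
}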

> 
>> +		return;
>> +
>> +	vaddr = phys_to_virt(io_tlb_start);
>> +	bytes = PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT);
>> +	swiotlb_set_mem_unenc(vaddr, bytes);
>> +	memset(vaddr, 0, bytes);
>> +
>> +	vaddr = phys_to_virt(io_tlb_overflow_buffer);
>> +	bytes = PAGE_ALIGN(io_tlb_overflow);
>> +	swiotlb_set_mem_unenc(vaddr, bytes);
>> +	memset(vaddr, 0, bytes);
>> +}
>> +
>> @@ -541,7 +583,7 @@ static phys_addr_t
>>  map_single(struct device *hwdev, phys_addr_t phys, size_t size,
>>  	   enum dma_data_direction dir)
>>  {
>> -	dma_addr_t start_dma_addr = phys_to_dma(hwdev, io_tlb_start);
>> +	dma_addr_t start_dma_addr = swiotlb_phys_to_dma(hwdev, io_tlb_start);
> 
> We have decrypted io_tlb_start before, so shouldn't its physical address
> be saved without the sme bit?  (Which changes a lot ...)

I'm not sure what you mean here, can you elaborate a bit more?
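
If it helps pin down where I'm coming from: the intent in this series is
that io_tlb_start stays a plain physical address, and the encryption mask
only comes into play when converting to a DMA address. Something like this
is the distinction I have in mind (sketch only, simplified):

	/* phys_to_dma() applies the encryption mask ... */
	dma_addr_t enc_addr   = phys_to_dma(hwdev, io_tlb_start);

	/*
	 * ... while swiotlb_phys_to_dma() does not, because the bounce
	 * buffers have been marked un-encrypted.
	 */
	dma_addr_t unenc_addr = swiotlb_phys_to_dma(hwdev, io_tlb_start);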

Thanks,
Tom

> 
> Thanks.
> 
