Message-ID: <bc09628a-d329-bb95-d737-b7c618860c6a@amd.com>
Date:   Tue, 22 Jan 2019 21:40:16 +0000
From:   "Lendacky, Thomas" <Thomas.Lendacky@....com>
To:     Thiago Jung Bauermann <bauerman@...ux.ibm.com>,
        "x86@...nel.org" <x86@...nel.org>
CC:     "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Christoph Hellwig <hch@....de>,
        Marek Szyprowski <m.szyprowski@...sung.com>,
        Robin Murphy <robin.murphy@....com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>, Ram Pai <linuxram@...ibm.com>
Subject: Re: [PATCH 1/2] dma-direct: set_memory_{en,de}crypted() take number
 of pages

On 1/22/19 3:17 PM, Thiago Jung Bauermann wrote:
> From: Ram Pai <linuxram@...ibm.com>
> 
> set_memory_encrypted() and set_memory_decrypted() expect the number of
> PAGE_SIZE pages to encrypt or decrypt. dma_direct_alloc() and
> dma_direct_free() instead pass number of bytes. This encrypts/decrypts a
> huge number of pages resulting in data corruption.

That is not what it is doing. See comments below.

> 
> Fixed it.
> 
> [ bauermann: Slightly reworded commit message and added Fixes: tag. ]
> Fixes: d7b417fa08d1 ("x86/mm: Add DMA support for SEV memory encryption")
> Signed-off-by: Ram Pai <linuxram@...ibm.com>
> Signed-off-by: Thiago Jung Bauermann <bauerman@...ux.ibm.com>
> ---
>  kernel/dma/direct.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> Notes:
> 
> 1. This was tested on powerpc with patches adding support for running
>    under the ultravisor, which are not yet upstream.
> 
> 2. The lines changed in this patch were added by commit c10f07aa27da
>    ("dma/direct: Handle force decryption for DMA coherent buffers in
>    common code"), but it only moves the code from an x86-specific file.
>    Therefore the Fixes tag references the commit that first introduced
>    the code.
> 
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 355d16acee6d..bc78c37220ba 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -166,7 +166,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
>  
>  	ret = page_address(page);
>  	if (force_dma_unencrypted()) {
> -		set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
> +		set_memory_decrypted((unsigned long)ret, 1);

The get_order() function returns the allocation order for the specified size;
shifting 1 left by that order, as the original code does, gives the number of
pages. The change is definitely wrong since you are now hardcoding the page
count to 1. The call to __dma_direct_alloc_pages() will allocate more than one
page if the size is greater than a page.

>  		*dma_handle = __phys_to_dma(dev, page_to_phys(page));
>  	} else {
>  		*dma_handle = phys_to_dma(dev, page_to_phys(page));
> @@ -186,10 +186,8 @@ void __dma_direct_free_pages(struct device *dev, size_t size, struct page *page)
>  void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
>  		dma_addr_t dma_addr, unsigned long attrs)
>  {
> -	unsigned int page_order = get_order(size);
> -
>  	if (force_dma_unencrypted())
> -		set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order);
> +		set_memory_encrypted((unsigned long)cpu_addr, 1);

Same comment here as above.

Thanks,
Tom

>  	__dma_direct_free_pages(dev, size, virt_to_page(cpu_addr));
>  }
>  
> 
