Message-ID: <20160921175855.GG18176@leverpostej>
Date:   Wed, 21 Sep 2016 18:58:55 +0100
From:   Mark Rutland <mark.rutland@....com>
To:     Laura Abbott <labbott@...hat.com>
Cc:     Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will.deacon@....com>,
        Ard Biesheuvel <ard.biesheuvel@...aro.org>,
        Kees Cook <keescook@...omium.org>,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: Correctly bounds check virt_addr_valid

Hi,

On Wed, Sep 21, 2016 at 10:28:48AM -0700, Laura Abbott wrote:
> virt_addr_valid is supposed to return true if and only if virt_to_page
> returns a valid page structure. The current macro does the address math
> on whatever it is given and passes the result to pfn_valid, so vmalloc
> and module addresses can produce a pfn that happens to be valid. Fix
> this by only performing the pfn_valid check on addresses that have the
> potential to be valid.
> 
> Signed-off-by: Laura Abbott <labbott@...hat.com>
> ---
> This has caused bugs in hardened usercopy at least twice, so it is an
> actual problem.
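
To make the failure mode concrete, here's a userspace sketch of the old
macro's arithmetic (PAGE_OFFSET, PHYS_OFFSET, and the example address
are made-up values for illustration, not real hardware):

/* Sketch only: hypothetical constants, not from this patch. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_OFFSET	0xffff800000000000ULL	/* linear map base */
#define PHYS_OFFSET	0x40000000ULL		/* start of RAM */

int main(void)
{
	/* A module/vmalloc-style address below PAGE_OFFSET. */
	uint64_t kaddr = 0xffff000008100000ULL;

	/* Old macro: the pfn math runs unconditionally, so even this
	 * non-linear address yields a plausible-looking pfn. */
	uint64_t pfn = ((kaddr & ~PAGE_OFFSET) + PHYS_OFFSET) >> PAGE_SHIFT;
	printf("pfn = 0x%llx\n", (unsigned long long)pfn);

	/* New macro: addresses below PAGE_OFFSET are rejected before
	 * pfn_valid() ever sees them. */
	printf("linear? %s\n", kaddr >= PAGE_OFFSET ? "yes" : "no");
	return 0;
}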

Are there other potentially-broken users of virt_addr_valid? It's not
clear to me what some drivers are doing with this, and therefore whether
we need to cc stable.

> A further TODO is full DEBUG_VIRTUAL support to
> catch these types of mistakes.
> ---
>  arch/arm64/include/asm/memory.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 31b7322..f741e19 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -214,7 +214,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>  
>  #ifndef CONFIG_SPARSEMEM_VMEMMAP
>  #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> -#define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
> +#define virt_addr_valid(kaddr)	(((u64)kaddr) >= PAGE_OFFSET && pfn_valid(__pa(kaddr) >> PAGE_SHIFT))
>  #else
>  #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
>  #define __page_to_voff(kaddr)	(((u64)(page) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
> @@ -222,8 +222,8 @@ static inline void *phys_to_virt(phys_addr_t x)
>  #define page_to_virt(page)	((void *)((__page_to_voff(page)) | PAGE_OFFSET))
>  #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
>  
> -#define virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
> -					   + PHYS_OFFSET) >> PAGE_SHIFT)
> +#define virt_addr_valid(kaddr)	(((u64)kaddr) >= PAGE_OFFSET && pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
> +					   + PHYS_OFFSET) >> PAGE_SHIFT))
>  #endif
>  #endif

Given the common sub-expression, perhaps it would be better to leave
these as-is, but prefix them with '_', and after the #endif, have
something like:

#define _virt_addr_is_linear(kaddr)	(((u64)(kaddr)) >= PAGE_OFFSET)
#define virt_addr_valid(kaddr)		(_virt_addr_is_linear(kaddr) && _virt_addr_valid(kaddr))
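
i.e., keeping the existing definitions but renamed, the block would end
up looking something like this (untested sketch; the [...] elides the
unchanged virt_to_page/page_to_virt definitions):

#ifndef CONFIG_SPARSEMEM_VMEMMAP
#define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define _virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
#else
[...]
#define _virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
					   + PHYS_OFFSET) >> PAGE_SHIFT)
#endif

#define _virt_addr_is_linear(kaddr)	(((u64)(kaddr)) >= PAGE_OFFSET)
#define virt_addr_valid(kaddr)		(_virt_addr_is_linear(kaddr) && _virt_addr_valid(kaddr))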

Otherwise, modulo the parenthesis issue you mentioned, this looks
logically correct to me.
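
On the DEBUG_VIRTUAL TODO: for reference, x86 does this by making
__pa() an out-of-line function that BUG()s when fed a non-linear
address (see arch/x86/mm/physaddr.c). An arm64 version might look
roughly like the below; just a sketch, with the helper name and the
exact check being illustrative:

/* Sketch only: modelled on x86's __phys_addr(), not part of this patch. */
phys_addr_t __virt_to_phys(unsigned long x)
{
	/* Catch misuse of __pa() on vmalloc/module addresses. */
	VIRTUAL_BUG_ON(x < PAGE_OFFSET);
	return ((x) & ~PAGE_OFFSET) + PHYS_OFFSET;
}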

Thanks,
Mark.
