Message-ID: <87eer8fu89.fsf@mpe.ellerman.id.au>
Date:   Mon, 25 May 2020 15:40:06 +1000
From:   Michael Ellerman <mpe@...erman.id.au>
To:     Christophe Leroy <christophe.leroy@...roup.eu>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Paul Mackerras <paulus@...ba.org>
Cc:     linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH v4 14/45] powerpc/32s: Don't warn when mapping RO data ROX.

Christophe Leroy <christophe.leroy@...roup.eu> writes:
> Mapping RO data as ROX is not an issue since that data
> cannot be modified to introduce an exploit.

Being pedantic: it is still an issue, in that it means there are more
targets for a code-reuse attack.

But given the entire kernel text is also available for code-reuse
attacks, the RO data is unlikely to contain any useful sequences that
aren't also in the kernel text.

> PPC64 already accepts RO data being mapped ROX, as a trade-off
> between kernel size and strictness of protection.
>
> On PPC32, kernel size is even more critical as the amount of
> memory is usually small.

Yep, I think it's a reasonable trade off to make.

cheers

> Depending on the number of available IBATs, the last IBAT might
> map beyond the end of text. Only warn if it crosses
> the end of RO data.
>
> Signed-off-by: Christophe Leroy <christophe.leroy@...roup.eu>
> ---
>  arch/powerpc/mm/book3s32/mmu.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
> index 39ba53ca5bb5..a9b2cbc74797 100644
> --- a/arch/powerpc/mm/book3s32/mmu.c
> +++ b/arch/powerpc/mm/book3s32/mmu.c
> @@ -187,6 +187,7 @@ void mmu_mark_initmem_nx(void)
>  	int i;
>  	unsigned long base = (unsigned long)_stext - PAGE_OFFSET;
>  	unsigned long top = (unsigned long)_etext - PAGE_OFFSET;
> +	unsigned long border = (unsigned long)__init_begin - PAGE_OFFSET;
>  	unsigned long size;
>  
>  	if (IS_ENABLED(CONFIG_PPC_BOOK3S_601))
> @@ -201,9 +202,10 @@ void mmu_mark_initmem_nx(void)
>  		size = block_size(base, top);
>  		size = max(size, 128UL << 10);
>  		if ((top - base) > size) {
> -			if (strict_kernel_rwx_enabled())
> -				pr_warn("Kernel _etext not properly aligned\n");
>  			size <<= 1;
> +			if (strict_kernel_rwx_enabled() && base + size > border)
> +				pr_warn("Some RW data is getting mapped X. "
> +					"Adjust CONFIG_DATA_SHIFT to avoid that.\n");
>  		}
>  		setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
>  		base += size;
> -- 
> 2.25.0
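
To make the sizing behaviour concrete, below is a small userspace sketch
of the loop touched by this patch. It is a simplification for
illustration only: the layout offsets, the 8-IBAT limit and the helper
functions are assumptions, not the kernel implementation.

/*
 * Userspace sketch of the IBAT sizing loop -- illustrative only.
 * The offsets, the 8-IBAT limit and the helpers are assumptions.
 */
#include <stdio.h>

#define MAX_BAT_SIZE	(256UL << 20)	/* assumed maximum BAT size */

/* Largest power of two <= x */
static unsigned long pow2_below(unsigned long x)
{
	unsigned long p = 1;

	while (p * 2 && p * 2 <= x)
		p *= 2;
	return p;
}

/* Natural alignment of base (largest power of two dividing it) */
static unsigned long align_of(unsigned long base)
{
	return base ? (base & -base) : MAX_BAT_SIZE;
}

/* Rough stand-in for block_size(): largest naturally aligned
 * power-of-two block starting at base that fits in [base, top) */
static unsigned long block_size(unsigned long base, unsigned long top)
{
	unsigned long size = pow2_below(top - base);

	if (size > align_of(base))
		size = align_of(base);
	if (size > MAX_BAT_SIZE)
		size = MAX_BAT_SIZE;
	return size;
}

int main(void)
{
	/* Hypothetical offsets from PAGE_OFFSET */
	unsigned long base   = 0;		/* _stext		    */
	unsigned long top    = 0x00620000;	/* _etext: ~6.1 MB of text  */
	unsigned long border = 0x00700000;	/* __init_begin: end of RO  */
	int i;

	for (i = 0; i < 8 && base < top; i++) {	/* assume 8 IBATs */
		unsigned long size = block_size(base, top);

		if (size < (128UL << 10))
			size = 128UL << 10;	/* 128 kB minimum */
		if (top - base > size) {
			/* Remaining text doesn't fit: double the block... */
			size <<= 1;
			/* ...and warn only if it now spills past RO data. */
			if (base + size > border)
				printf("IBAT%d maps RW data executable\n", i);
		}
		printf("IBAT%d: addr 0x%08lx size 0x%08lx\n", i, base, size);
		base += size;
	}
	return 0;
}

With the hypothetical 7 MB border, the doubled first block spills past
the end of RO data and the warning fires; moving the border to an 8 MB
boundary (e.g. something like CONFIG_DATA_SHIFT=23) keeps the doubled
block within RO data and silences it.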
