Message-ID: <mhng-fa489ba7-f1c7-459c-aae0-0dc68c826635@palmerdabbelt-glaptop1>
Date: Wed, 04 Mar 2020 16:58:45 -0800 (PST)
From: Palmer Dabbelt <palmer@...belt.com>
To: zong.li@...ive.com
CC: Paul Walmsley <paul.walmsley@...ive.com>, aou@...s.berkeley.edu,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
zong.li@...ive.com
Subject: Re: [PATCH 5/8] riscv: add alignment for text, rodata and data sections
On Mon, 17 Feb 2020 00:32:20 PST (-0800), zong.li@...ive.com wrote:
> The kernel mapping tries to optimize itself by using a bigger mapping
> size. On rv64 it tries to use PMD_SIZE, and on rv32 it tries to use
> PGDIR_SIZE. To ensure that the start addresses of these sections fit
> the mapping entry size, align them to the largest alignment.
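
As a sketch of the constraint being described here (this is not the
kernel's actual mapping code, just an illustration): a superpage entry
can only be used when the virtual address, the physical address, and the
remaining size are all multiples of the superpage size, which is why the
section boundaries have to be aligned up front.

    /* Illustration only: when can a 2 MiB PMD mapping be used? */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define PMD_SIZE (1UL << 21)	/* 2 MiB superpage on rv64/Sv39 */

    static bool can_use_pmd_mapping(uintptr_t va, uintptr_t pa, size_t size)
    {
    	/* Both addresses must sit on a superpage boundary and at
    	 * least one full superpage must remain to be mapped. */
    	return !(va & (PMD_SIZE - 1)) && !(pa & (PMD_SIZE - 1)) &&
    	       size >= PMD_SIZE;
    }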
>
> Define a macro, SECTION_ALIGN, because HPAGE_SIZE, PMD_SIZE, etc. are
> not visible in the linker script.
>
> This patch prepares for STRICT_KERNEL_RWX support.
>
> Signed-off-by: Zong Li <zong.li@...ive.com>
> ---
> arch/riscv/include/asm/set_memory.h | 13 +++++++++++++
> arch/riscv/kernel/vmlinux.lds.S | 4 +++-
> 2 files changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> index a9783a878dca..a91f192063c2 100644
> --- a/arch/riscv/include/asm/set_memory.h
> +++ b/arch/riscv/include/asm/set_memory.h
> @@ -6,6 +6,7 @@
> #ifndef _ASM_RISCV_SET_MEMORY_H
> #define _ASM_RISCV_SET_MEMORY_H
>
> +#ifndef __ASSEMBLY__
> /*
> * Functions to change memory attributes.
> */
> @@ -17,4 +18,16 @@ int set_memory_nx(unsigned long addr, int numpages);
> int set_direct_map_invalid_noflush(struct page *page);
> int set_direct_map_default_noflush(struct page *page);
>
> +#endif /* __ASSEMBLY__ */
> +
> +#ifdef CONFIG_ARCH_HAS_STRICT_KERNEL_RWX
> +#ifdef CONFIG_64BIT
> +#define SECTION_ALIGN (1 << 21)
> +#else
> +#define SECTION_ALIGN (1 << 22)
> +#endif
> +#else /* !CONFIG_ARCH_HAS_STRICT_KERNEL_RWX */
> +#define SECTION_ALIGN L1_CACHE_BYTES
> +#endif /* CONFIG_ARCH_HAS_STRICT_KERNEL_RWX */
> +
> #endif /* _ASM_RISCV_SET_MEMORY_H */
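
A hypothetical sanity check (not part of this patch) of how the
hard-coded constants line up with the page-table geometry, assuming the
usual 4 KiB base pages; SKETCH_PAGE_SHIFT is a stand-in introduced only
for this illustration:

    /* Hypothetical check, not in the patch: 1 << 21 is PMD_SIZE on
     * rv64 (Sv39, 9 VPN bits per level) and 1 << 22 is PGDIR_SIZE on
     * rv32 (Sv32, 10 VPN bits per level). */
    #define SKETCH_PAGE_SHIFT 12	/* RISC-V uses 4 KiB base pages */
    #ifdef CONFIG_64BIT
    _Static_assert((1UL << 21) == (1UL << (SKETCH_PAGE_SHIFT + 9)),
    	       "SECTION_ALIGN should equal PMD_SIZE (2 MiB) on rv64");
    #else
    _Static_assert((1UL << 22) == (1UL << (SKETCH_PAGE_SHIFT + 10)),
    	       "SECTION_ALIGN should equal PGDIR_SIZE (4 MiB) on rv32");
    #endif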
> diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
> index 4ba8a5397e8b..0b145b9c1778 100644
> --- a/arch/riscv/kernel/vmlinux.lds.S
> +++ b/arch/riscv/kernel/vmlinux.lds.S
> @@ -37,6 +37,7 @@ SECTIONS
> PERCPU_SECTION(L1_CACHE_BYTES)
> __init_end = .;
>
> + . = ALIGN(SECTION_ALIGN);
> .text : {
> _text = .;
> _stext = .;
> @@ -53,13 +54,14 @@ SECTIONS
> }
>
> /* Start of data section */
> - RO_DATA(L1_CACHE_BYTES)
> + RO_DATA(SECTION_ALIGN)
> .srodata : {
> *(.srodata*)
> }
>
> EXCEPTION_TABLE(0x10)
>
> + . = ALIGN(SECTION_ALIGN);
> _sdata = .;
>
> RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
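
For reference, ALIGN() in the linker script just rounds the location
counter up, so the output section that follows starts on a superpage
boundary. A minimal sketch with made-up addresses:

    /* Sketch, not from this patch: ALIGN(2 MiB) rounds the location
     * counter up to the next PMD-mappable address. */
    SECTIONS
    {
    	. = 0x80201000;
    	. = ALIGN(0x200000);	/* . is now 0x80400000 */
    	.text : { *(.text) }
    }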
Reviewed-by: Palmer Dabbelt <palmerdabbelt@...gle.com>