Message-ID: <f67efae5-1565-5459-2877-31bdd1a40c0f@arm.com>
Date: Tue, 28 May 2019 08:24:50 +0200
From: Ard Biesheuvel <ard.biesheuvel@....com>
To: Anshuman Khandual <anshuman.khandual@....com>,
linux-arm-kernel@...ts.infradead.org
Cc: mark.rutland@....com, marc.zyngier@....com,
Will Deacon <will.deacon@....com>,
linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Nadav Amit <namit@...are.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
James Morse <james.morse@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Rick Edgecombe <rick.p.edgecombe@...el.com>
Subject: Re: [PATCH 1/4] arm64: module: create module allocations without exec
permissions
On 5/28/19 7:35 AM, Anshuman Khandual wrote:
>
>
> On 05/23/2019 03:52 PM, Ard Biesheuvel wrote:
>> Now that the core code manages the executable permissions of code
>> regions of modules explicitly, it is no longer necessary to create
>
> I guess the permission transition for the various module sections happens
> through module_enable_[ro|nx]() after allocating via module_alloc().
>
Indeed.
>> the module vmalloc regions with RWX permissions, and we can create
>> them with RW- permissions instead, which is preferred from a
>> security perspective.
>
> Makes sense. Will this be followed on all architectures now?
>
I am not sure if every architecture implements module_enable_[ro|nx](),
but if they do, they should probably apply this change as well.
>>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@....com>
>> ---
>> arch/arm64/kernel/module.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
>> index 2e4e3915b4d0..88f0ed31d9aa 100644
>> --- a/arch/arm64/kernel/module.c
>> +++ b/arch/arm64/kernel/module.c
>> @@ -41,7 +41,7 @@ void *module_alloc(unsigned long size)
>>
>> p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
>> module_alloc_base + MODULES_VSIZE,
>> - gfp_mask, PAGE_KERNEL_EXEC, 0,
>> + gfp_mask, PAGE_KERNEL, 0,
>> NUMA_NO_NODE, __builtin_return_address(0));
>>
>> if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
>> @@ -57,7 +57,7 @@ void *module_alloc(unsigned long size)
>> */
>> p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
>> module_alloc_base + SZ_4G, GFP_KERNEL,
>> - PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
>> + PAGE_KERNEL, 0, NUMA_NO_NODE,
>> __builtin_return_address(0));
>>
>> if (p && (kasan_module_alloc(p, size) < 0)) {
>>
>
> Which just makes sure that PTE_PXN never gets dropped while creating
> these mappings.
>
Not sure what you mean. Is there a question here?
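(Editorial note on the PTE_PXN remark above: on arm64 the two pgprot values
in the diff differ only in the Privileged-eXecute-Never bit, so using
PAGE_KERNEL does indeed amount to never clearing PTE_PXN at creation time.
Paraphrased from arch/arm64/include/asm/pgtable-prot.h of roughly this kernel
era; the exact macro composition may differ between versions:

	#define PAGE_KERNEL		__pgprot(PROT_NORMAL)
	#define PAGE_KERNEL_EXEC	__pgprot(PROT_NORMAL & ~PTE_PXN)

where PROT_NORMAL includes PTE_PXN | PTE_UXN among its attribute bits.)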