Message-ID: <5345A4FA.1090008@codeaurora.org>
Date:	Wed, 09 Apr 2014 12:52:26 -0700
From:	Laura Abbott <lauraa@...eaurora.org>
To:	Kees Cook <keescook@...omium.org>,
	Steve Capper <steve.capper@...aro.org>
CC:	Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will.deacon@....com>,
	LKML <linux-kernel@...r.kernel.org>,
	Rabin Vincent <rabin@....in>,
	Alexander Holler <holler@...oftware.de>,
	Russell King <linux@....linux.org.uk>,
	"linux-arm-kernel@...ts.infradead.org" 
	<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH v2 2/2] ARM: mm: allow text and rodata sections to be
 read-only

On 4/9/2014 9:12 AM, Kees Cook wrote:
> On Wed, Apr 9, 2014 at 2:02 AM, Steve Capper <steve.capper@...aro.org> wrote:
>> Hi Kees,
>>
>> On Mon, Apr 07, 2014 at 08:15:10PM -0700, Kees Cook wrote:
>>> This introduces CONFIG_DEBUG_RODATA, making kernel text and rodata
>>> read-only. Additionally, this splits rodata from text so that rodata can
>>> also be NX, which may lead to wasted memory when aligning to SECTION_SIZE.
>>>
>>> The read-only areas are made writable during ftrace updates. Additional
>>> work is needed for kprobes and kexec, so the feature is temporarily
>>> marked as unavailable in Kconfig when those options are selected.
>>>
>>> Signed-off-by: Kees Cook <keescook@...omium.org>
>>> ---
>>>  arch/arm/include/asm/cacheflush.h |    9 ++++++++
>>>  arch/arm/kernel/ftrace.c          |   17 ++++++++++++++
>>>  arch/arm/kernel/vmlinux.lds.S     |    3 +++
>>>  arch/arm/mm/Kconfig               |   12 ++++++++++
>>>  arch/arm/mm/init.c                |   46 +++++++++++++++++++++++++++++++++++++
>>>  5 files changed, 87 insertions(+)
>>>
>>> diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
>>> index 8b8b61685a34..b6fea0a1a88b 100644
>>> --- a/arch/arm/include/asm/cacheflush.h
>>> +++ b/arch/arm/include/asm/cacheflush.h
>>> @@ -487,4 +487,13 @@ int set_memory_rw(unsigned long addr, int numpages);
>>>  int set_memory_x(unsigned long addr, int numpages);
>>>  int set_memory_nx(unsigned long addr, int numpages);
>>>
>>> +#ifdef CONFIG_DEBUG_RODATA
>>> +void mark_rodata_ro(void);
>>> +void set_kernel_text_rw(void);
>>> +void set_kernel_text_ro(void);
>>> +#else
>>> +static inline void set_kernel_text_rw(void) { }
>>> +static inline void set_kernel_text_ro(void) { }
>>> +#endif
>>> +
>>>  #endif
>>> diff --git a/arch/arm/kernel/ftrace.c b/arch/arm/kernel/ftrace.c
>>> index af9a8a927a4e..ea446ae09c89 100644
>>> --- a/arch/arm/kernel/ftrace.c
>>> +++ b/arch/arm/kernel/ftrace.c
>>> @@ -15,6 +15,7 @@
>>>  #include <linux/ftrace.h>
>>>  #include <linux/uaccess.h>
>>>  #include <linux/module.h>
>>> +#include <linux/stop_machine.h>
>>>
>>>  #include <asm/cacheflush.h>
>>>  #include <asm/opcodes.h>
>>> @@ -35,6 +36,22 @@
>>>
>>>  #define      OLD_NOP         0xe1a00000      /* mov r0, r0 */
>>>
>>> +static int __ftrace_modify_code(void *data)
>>> +{
>>> +     int *command = data;
>>> +
>>> +     set_kernel_text_rw();
>>> +     ftrace_modify_all_code(*command);
>>> +     set_kernel_text_ro();
>>> +
>>> +     return 0;
>>> +}
>>
>> Would another approach be to keep all of the kernel .text ro and instead
>> override probe_kernel_write (which is a weak symbol) to create a separate,
>> temporary rw mapping to the specific page that needs to be modified?
>>
>> That way you only worry about TLB and cache maintenance for a smaller
>> area. Also, your kernel .text VAs never actually become writable, so
>> you don't need to worry as much about unauthorised changes whilst your
>> guard is temporarily down.
>>
>> (Though lots of small changes could probably make this more
>> expensive, and you will need to double check aliasing in pre-ARMv7).
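
(For illustration only, not something from this patch: an override along
those lines might look roughly like the sketch below, assuming a fixmap
slot reserved for text patching. FIX_TEXT_POKE0, the single-page limit,
and the exact cache/TLB maintenance are all assumptions here.)

#include <linux/uaccess.h>
#include <linux/mm.h>
#include <asm/fixmap.h>
#include <asm/cacheflush.h>

/*
 * Sketch: keep kernel .text RO and route text patching through a
 * temporary RW alias of the target page.
 */
long probe_kernel_write(void *dst, const void *src, size_t size)
{
	unsigned long offset = (unsigned long)dst & ~PAGE_MASK;
	phys_addr_t phys = __pa((unsigned long)dst & PAGE_MASK);
	void *waddr;

	/* Keep the sketch simple: no writes that cross a page boundary. */
	if (offset + size > PAGE_SIZE)
		return -EINVAL;

	/* Map an RW alias of the page containing dst. */
	set_fixmap(FIX_TEXT_POKE0, phys);
	waddr = (void *)(fix_to_virt(FIX_TEXT_POKE0) + offset);

	memcpy(waddr, src, size);

	/*
	 * Push the new instructions out via the original VA; aliasing
	 * caches (pre-ARMv7) would also need the alias cleaned.
	 */
	flush_icache_range((unsigned long)dst, (unsigned long)dst + size);

	/* Tear down the alias and its TLB entry. */
	clear_fixmap(FIX_TEXT_POKE0);

	return 0;
}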
> 
> As I understand it, early boot needs some of these areas RWX. Applying
> the protection when the init memory is freed means we can avoid all of
> that and still allow the memory to be reclaimed. As for not doing the
> section re-mappings, I share the same concern: lots of small changes
> could get very expensive. As such, I think this is the cleanest
> approach that is still portable.
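
(Purely as a sketch of what that late, whole-region flip could look
like; this is not the actual arch/arm/mm/init.c hunk from the patch.
update_sections(), its flags, and the use of __start_rodata as the
text/rodata split point are made up for illustration.)

#include <linux/types.h>
#include <asm/sections.h>	/* _stext, __start_rodata, __init_begin */

/*
 * Hypothetical helper: rewrite the section (PMD) mappings covering
 * [start, end) with the requested permissions, then flush the TLB.
 */
static void update_sections(unsigned long start, unsigned long end,
			    bool ro, bool nx)
{
	/* arch-specific PMD rewrite + TLB flush would go here */
}

/* Called late in boot, once the init memory has been freed. */
void mark_rodata_ro(void)
{
	/* kernel text: read-only, still executable */
	update_sections((unsigned long)_stext,
			(unsigned long)__start_rodata, true, false);
	/* rodata: read-only and non-executable */
	update_sections((unsigned long)__start_rodata,
			(unsigned long)__init_begin, true, true);
}

/* ftrace toggles only the text region around code patching. */
void set_kernel_text_rw(void)
{
	update_sections((unsigned long)_stext,
			(unsigned long)__start_rodata, false, false);
}

void set_kernel_text_ro(void)
{
	update_sections((unsigned long)_stext,
			(unsigned long)__start_rodata, true, false);
}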
> 

FWIW, our out-of-tree patches set up the permissions at map_lowmem time,
and to the best of my knowledge we've never run into any issues with
incorrect RWX permissions.

Just for comparison, how many small changes would need to happen for an
ftrace use case? Would these changes be happening on a hot path?

> -Kees
> 

Thanks,
Laura
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation
