Message-ID: <56BE18AB.4090902@linaro.org>
Date:	Fri, 12 Feb 2016 09:38:51 -0800
From:	"Shi, Yang" <yang.shi@...aro.org>
To:	James Morse <james.morse@....com>, will.deacon@....com
Cc:	catalin.marinas@....com, linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org,
	linaro-kernel@...ts.linaro.org
Subject: Re: [PATCH] arm64: make irq_stack_ptr more robust

On 2/12/2016 5:47 AM, James Morse wrote:
> Hi!
>
> On 11/02/16 21:53, Yang Shi wrote:
>> Switching between stacks is only valid if we are tracing ourselves while on the
>> irq_stack, so it is only valid when tracing current in a non-preemptible
>> context; otherwise it is just zeroed off.
>
> Given it was picked up with CONFIG_DEBUG_PREEMPT:
>
> Fixes: 132cd887b5c5 ("arm64: Modify stack trace and dump for use with irq_stack")

Will add in v2.
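
(For anyone skimming the thread: a minimal standalone sketch of what the
guard boils down to. The helper name below is made up for illustration;
only IRQ_STACK_PTR(), preemptible() and smp_processor_id() are the real
interfaces.)

	/*
	 * Sketch only: tsk == current rules out sleeping tasks, which can
	 * never be on an irq stack, and !preemptible() means we cannot be
	 * migrated, so smp_processor_id() is stable here.
	 */
	static unsigned long pick_irq_stack_ptr(struct task_struct *tsk)
	{
		if (tsk == current && !preemptible())
			return IRQ_STACK_PTR(smp_processor_id());

		/* not tracing current, or we may be preempted: zero it off */
		return 0;
	}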

>
>
>> Signed-off-by: Yang Shi <yang.shi@...aro.org>
>> ---
>>   arch/arm64/kernel/stacktrace.c | 13 ++++++-------
>>   arch/arm64/kernel/traps.c      | 11 ++++++++++-
>>   2 files changed, 16 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
>> index 12a18cb..d9751a4 100644
>> --- a/arch/arm64/kernel/stacktrace.c
>> +++ b/arch/arm64/kernel/stacktrace.c
>> @@ -44,14 +44,13 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
>>   	unsigned long irq_stack_ptr;
>>
>>   	/*
>> -	 * Use raw_smp_processor_id() to avoid false-positives from
>> -	 * CONFIG_DEBUG_PREEMPT. get_wchan() calls unwind_frame() on sleeping
>> -	 * task stacks, we can be pre-empted in this case, so
>> -	 * {raw_,}smp_processor_id() may give us the wrong value. Sleeping
>> -	 * tasks can't ever be on an interrupt stack, so regardless of cpu,
>> -	 * the checks will always fail.
>> +	 * Switching between stacks is valid when tracing current and in
>> +	 * non-preemptible context.
>>   	 */
>> -	irq_stack_ptr = IRQ_STACK_PTR(raw_smp_processor_id());
>> +	if (tsk == current && !preemptible())
>> +		irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
>> +	else
>> +		irq_stack_ptr = 0;
>>
>>   	low  = frame->sp;
>>   	/* irq stacks are not THREAD_SIZE aligned */
>> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
>> index cbedd72..7d8db3a 100644
>> --- a/arch/arm64/kernel/traps.c
>> +++ b/arch/arm64/kernel/traps.c
>> @@ -146,9 +146,18 @@ static void dump_instr(const char *lvl, struct pt_regs *regs)
>>   static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
>>   {
>>   	struct stackframe frame;
>> -	unsigned long irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
>> +	unsigned long irq_stack_ptr;
>>   	int skip;
>>
>> +	/*
>> +	 * Switching between  stacks is valid when tracing current and in
>
> Nit: Two spaces: "between[ ][ ]stacks"

Will fix in v2.

>
>
>> +	 * non-preemptible context.
>> +	 */
>> +	if (tsk == current && !preemptible())
>> +		irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
>> +	else
>> +		irq_stack_ptr = 0;
>> +
>>   	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
>>
>>   	if (!tsk)
>>
>
> Neither file includes 'linux/preempt.h' for the definition of preemptible().
> (I can't talk: I should have included smp.h for smp_processor_id())

I tried to build the kernel both with and without preempt, and both
builds work. I also saw that arch/arm64/include/asm/Kbuild has:

generic-y += preempt.h

So it sounds like preempt.h is already pulled in by default.
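
For reference (from memory of include/linux/preempt.h, not something this
patch touches), preemptible() is roughly:

#ifdef CONFIG_PREEMPT_COUNT
#define preemptible()	(preempt_count() == 0 && !irqs_disabled())
#else
#define preemptible()	0
#endif

With !CONFIG_PREEMPT_COUNT it simply evaluates to 0, so the no-preempt
build would be happy either way.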

Thanks,
Yang

>
>
> Acked-by: James Morse <james.morse@....com>
> Tested-by: James Morse <james.morse@....com>
>
>
> Thanks!
>
> James
>
>
