Message-ID: <849ae148-85cd-5f46-d98b-b827cc9c605c@oracle.com>
Date:   Wed, 31 Oct 2018 14:01:20 +0800
From:   Zhenzhong Duan <zhenzhong.duan@...cle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Linux-Kernel <linux-kernel@...r.kernel.org>, mingo@...hat.com,
        konrad.wilk@...cle.com, dwmw@...zon.co.uk, tglx@...utronix.de,
        Srinivas REDDY Eeda <srinivas.eeda@...cle.com>, bp@...e.de,
        hpa@...or.com
Subject: Re: [PATCH 3/3] kprobes/x86: Simplify indirect-jump check in
 retpoline

On 2018/10/30 16:36, Peter Zijlstra wrote:
> On Mon, Oct 29, 2018 at 11:55:06PM -0700, Zhenzhong Duan wrote:
>> Since CONFIG_RETPOLINE now hard-depends on compiler support,
>> replacing the indirect-jump check with the range check is safe in that case.
> 
> Can we put kprobes on module init text before we run alternatives on it?

Forgive me, I don't understand your question. Do you mean this patch 
impacts kprobes on module init text?

> 
>> @@ -240,20 +242,16 @@ static int insn_jump_into_range(struct insn *insn, unsigned long start, int len)
>>   
>>   static int insn_is_indirect_jump(struct insn *insn)
>>   {
>> -	int ret = __insn_is_indirect_jump(insn);
>> +	int ret;
>>   
>>   #ifdef CONFIG_RETPOLINE
>> -	/*
>> -	 * Jump to x86_indirect_thunk_* is treated as an indirect jump.
>> -	 * Note that even with CONFIG_RETPOLINE=y, the kernel compiled with
>> -	 * older gcc may use indirect jump. So we add this check instead of
>> -	 * replace indirect-jump check.
>> -	 */
>> -	if (!ret)
>> +	/* Jump to x86_indirect_thunk_* is treated as an indirect jump. */
>>   		ret = insn_jump_into_range(insn,
>>   				(unsigned long)__indirect_thunk_start,
>>   				(unsigned long)__indirect_thunk_end -
>>   				(unsigned long)__indirect_thunk_start);
>> +#else
>> +		ret = __insn_is_indirect_jump(insn);
>>   #endif
>>   	return ret;
>>   }
> 
> The resulting code is indented wrong.
> 

Oh, yes. Thanks for pointing that out.
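
For reference, a minimal sketch of the function with the indentation
fixed (same logic as the hunk above; insn_jump_into_range(),
__insn_is_indirect_jump() and the __indirect_thunk_* symbols as
already used in that hunk):

static int insn_is_indirect_jump(struct insn *insn)
{
	int ret;

#ifdef CONFIG_RETPOLINE
	/*
	 * With CONFIG_RETPOLINE the compiler turns every indirect jump
	 * into a jump to an x86_indirect_thunk_* stub, so a jump into
	 * the thunk range is treated as an indirect jump.
	 */
	ret = insn_jump_into_range(insn,
			(unsigned long)__indirect_thunk_start,
			(unsigned long)__indirect_thunk_end -
			(unsigned long)__indirect_thunk_start);
#else
	ret = __insn_is_indirect_jump(insn);
#endif
	return ret;
}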

Zhenzhong
