Message-Id: <1525680222.8nou0tzkkt.naveen@linux.ibm.com>
Date:   Mon, 07 May 2018 13:41:53 +0530
From:   "Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com>
To:     Masami Hiramatsu <mhiramat@...nel.org>
Cc:     Arnaldo Carvalho de Melo <acme@...nel.org>,
        linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
        linux-trace-users@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
        Namhyung Kim <namhyung@...nel.org>,
        Ravi Bangoria <ravi.bangoria@...ux.vnet.ibm.com>,
        Steven Rostedt <rostedt@...dmis.org>, shuah@...nel.org,
        Tom Zanussi <tom.zanussi@...ux.intel.com>,
        Ananth N Mavinakayanahalli <ananth@...ux.vnet.ibm.com>
Subject: Re: [PATCH v7 00/16] tracing: probeevent: Improve fetcharg features

Masami Hiramatsu wrote:
> On Sat, 05 May 2018 13:16:04 +0530
> "Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com> wrote:
> 
>> Masami Hiramatsu wrote:
>> > On Fri, 4 May 2018 12:06:42 -0400
>> > Steven Rostedt <rostedt@...dmis.org> wrote:
>> > 
>> >> On Sat, 5 May 2018 00:48:28 +0900
>> >> Masami Hiramatsu <mhiramat@...nel.org> wrote:
>> >> 
>> >> > > Also, while looking at the kprobe code, I came across this
>> >> > > function:
>> >> > >   
>> >> > > > /* Ftrace callback handler for kprobes -- called under preempt disabled */
>> >> > > > void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
>> >> > > > 			   struct ftrace_ops *ops, struct pt_regs *regs)
>> >> > > > {
>> >> > > > 	struct kprobe *p;
>> >> > > > 	struct kprobe_ctlblk *kcb;
>> >> > > > 
>> >> > > > 	/* Preempt is disabled by ftrace */
>> >> > > > 	p = get_kprobe((kprobe_opcode_t *)ip);
>> >> > > > 	if (unlikely(!p) || kprobe_disabled(p))
>> >> > > > 		return;
>> >> > > > 
>> >> > > > 	kcb = get_kprobe_ctlblk();
>> >> > > > 	if (kprobe_running()) {
>> >> > > > 		kprobes_inc_nmissed_count(p);
>> >> > > > 	} else {
>> >> > > > 		unsigned long orig_ip = regs->ip;
>> >> > > > 		/* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
>> >> > > > 		regs->ip = ip + sizeof(kprobe_opcode_t);
>> >> > > > 
>> >> > > > 		/* To emulate trap based kprobes, preempt_disable here */
>> >> > > > 		preempt_disable();
>> >> > > > 		__this_cpu_write(current_kprobe, p);
>> >> > > > 		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
>> >> > > > 		if (!p->pre_handler || !p->pre_handler(p, regs)) {
>> >> > > > 			__skip_singlestep(p, regs, kcb, orig_ip);
>> >> > > > 			preempt_enable_no_resched();  
>> >> > > 
>> >> > > This preemption disabling and enabling looks rather strange. Looking at
>> >> > > git blame, it appears this was added for jprobes. Can we remove it now
>> >> > > that jprobes is going away?  
>> >> > 
>> >> > No, that is not for jprobes but for compatibility with the kprobe
>> >> > user handlers. Since this transformation is done silently, users
>> >> > cannot change their handlers for the ftrace case, so we need to
>> >> > keep this condition the same as in the original kprobes.
>> >> > 
>> >> > And anyway, since smp_processor_id() is used to access per-cpu
>> >> > data, we need preemption disabled there, correct?
>> >> 
>> >> But as stated at the start of the function:
>> >> 
>> >>  /* Preempt is disabled by ftrace */
>> > 
>> > Ah, yes. So this is only for jprobes.
>> > 
>> >> 
>> >> 
>> >> The reason I ask is that for this function we have:
>> >> 
>> >> 		/* To emulate trap based kprobes, preempt_disable here */
>> >> 		preempt_disable();
>> >> 		__this_cpu_write(current_kprobe, p);
>> >> 		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
>> >> 		if (!p->pre_handler || !p->pre_handler(p, regs)) {
>> >> 			__skip_singlestep(p, regs, kcb, orig_ip);
>> >> 			preempt_enable_no_resched();
>> >> 		}
>> >> 
>> >> And in arch/x86/kernel/kprobes/core.c we have:
>> >> 
>> >> 	preempt_disable();
>> >> 
>> >> 	kcb = get_kprobe_ctlblk();
>> >> 	p = get_kprobe(addr);
>> >> 
>> >> 	if (p) {
>> >> 		if (kprobe_running()) {
>> >> 			if (reenter_kprobe(p, regs, kcb))
>> >> 				return 1;
>> >> 		} else {
>> >> 			set_current_kprobe(p, regs, kcb);
>> >> 			kcb->kprobe_status = KPROBE_HIT_ACTIVE;
>> >> 
>> >> 			/*
>> >> 			 * If we have no pre-handler or it returned 0, we
>> >> 			 * continue with normal processing.  If we have a
>> >> 			 * pre-handler and it returned non-zero, it prepped
>> >> 			 * for calling the break_handler below on re-entry
>> >> 			 * for jprobe processing, so get out doing nothing
>> >> 			 * more here.
>> >> 			 */
>> >> 			if (!p->pre_handler || !p->pre_handler(p, regs))
>> >> 				setup_singlestep(p, regs, kcb, 0);
>> >> 			return 1;
>> >> 
>> >> 
>> >> Which is why I thought it was for jprobes. I'm a bit confused about
>> >> where preemption is enabled again.
>> > 
>> > You're right. So I would like to remove it along with the x86 jprobes
>> > support code, to avoid inconsistency.
>> 
>> I didn't understand that. Which code are you planning to remove? Can you 
>> please elaborate? I thought we still need to disable preemption in the 
>> ftrace handler.
> 
> Yes, kprobe_ftrace_handler itself must run with preemption disabled,
> because it depends on per-cpu variables. What I will remove is the
> redundant, unbalanced preempt_disable()/preempt_enable_no_resched()
> pair in kprobe_ftrace_handler, and the x86 jprobes port, which is no
> longer used.
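
If I understand correctly, the handler would then look roughly like this
(just a sketch of the snippet quoted above with the unbalanced pair
dropped; an illustration, not the actual patch):

void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
			   struct ftrace_ops *ops, struct pt_regs *regs)
{
	struct kprobe *p;
	struct kprobe_ctlblk *kcb;

	/* Preempt is disabled by ftrace */
	p = get_kprobe((kprobe_opcode_t *)ip);
	if (unlikely(!p) || kprobe_disabled(p))
		return;

	kcb = get_kprobe_ctlblk();
	if (kprobe_running()) {
		kprobes_inc_nmissed_count(p);
	} else {
		unsigned long orig_ip = regs->ip;

		/* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
		regs->ip = ip + sizeof(kprobe_opcode_t);

		/* No extra preempt_disable(): ftrace already disabled it */
		__this_cpu_write(current_kprobe, p);
		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
		if (!p->pre_handler || !p->pre_handler(p, regs))
			__skip_singlestep(p, regs, kcb, orig_ip);
	}
}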

Won't that break out-of-tree users that depend on returning a non-zero 
value to handle preemption differently? You seem to have alluded to this 
earlier in the thread, where you said that this is not just for jprobes 
(though jprobes was the main use case it was added for).
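
To make the concern concrete, here is a hypothetical out-of-tree
pre-handler along those lines (all names below are made up for
illustration) -- it returns a non-zero value so that
kprobe_ftrace_handler() skips both __skip_singlestep() and
preempt_enable_no_resched(), diverts execution, and takes care of
preemption on its own, much like the jprobes path used to:

#include <linux/kprobes.h>
#include <linux/preempt.h>

static void my_trampoline(void)
{
	/* ... module-private handling ... */

	/* Pairs with the preempt_disable() done in kprobe_ftrace_handler() */
	preempt_enable_no_resched();

	/* ... arrange to resume the original code path ... */
}

static int my_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	/* Divert execution to the module's trampoline */
	regs->ip = (unsigned long)my_trampoline;

	/*
	 * Returning non-zero tells kprobe_ftrace_handler() not to call
	 * __skip_singlestep() or preempt_enable_no_resched(); this
	 * handler is expected to balance the preempt count itself.
	 */
	return 1;
}

If the extra preempt_disable() goes away, a handler like this ends up
with an unbalanced preempt count.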

- Naveen

