Message-Id: <20200828105714.b499777a12e5cd5d11855f8b@kernel.org>
Date: Fri, 28 Aug 2020 10:57:14 +0900
From: Masami Hiramatsu <mhiramat@...nel.org>
To: Masami Hiramatsu <mhiramat@...nel.org>
Cc: linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Eddy Wu <Eddy_Wu@...ndmicro.com>, x86@...nel.org,
"David S . Miller" <davem@...emloft.net>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...hat.com>,
"Naveen N . Rao" <naveen.n.rao@...ux.ibm.com>,
Anil S Keshavamurthy <anil.s.keshavamurthy@...el.com>,
linux-arch@...r.kernel.org, guoren@...nel.org
Subject: Re: [PATCH v3 01/16] kprobes: Add generic kretprobe trampoline
handler
On Fri, 28 Aug 2020 01:38:44 +0900
Masami Hiramatsu <mhiramat@...nel.org> wrote:
> +unsigned long __kretprobe_trampoline_handler(struct pt_regs *regs,
> + unsigned long trampoline_address,
> + void *frame_pointer)
> +{
> + struct kretprobe_instance *ri = NULL;
> + struct hlist_head *head, empty_rp;
> + struct hlist_node *tmp;
> + unsigned long flags, orig_ret_address = 0;
> + kprobe_opcode_t *correct_ret_addr = NULL;
> + bool skipped = false;
> +
> + INIT_HLIST_HEAD(&empty_rp);
> + kretprobe_hash_lock(current, &head, &flags);
> +
> + /*
> + * It is possible to have multiple instances associated with a given
> + * task either because multiple functions in the call path have
> + * return probes installed on them, and/or more than one
> + * return probe was registered for a target function.
> + *
> + * We can handle this because:
> + * - instances are always pushed into the head of the list
> + * - when multiple return probes are registered for the same
> + * function, the (chronologically) first instance's ret_addr
> + * will be the real return address, and all the rest will
> + * point to kretprobe_trampoline.
> + */
> + hlist_for_each_entry(ri, head, hlist) {
> + if (ri->task != current)
> + /* another task is sharing our hash bucket */
> + continue;
> + /*
> + * Return probes must be pushed onto this hash list in the
> + * correct order (same as the return order) so that they can
> + * be popped correctly. However, if we find an entry pushed
> + * in the wrong order, it means we found a function which
> + * should not be probed, because the out-of-order entry was
> + * pushed while processing another kretprobe.
> + */
> + if (ri->fp != frame_pointer) {
> + if (!skipped)
> + pr_warn("kretprobe is stacked incorrectly. Trying to fixup.\n");
> + skipped = true;
> + continue;
> + }
> +
> + orig_ret_address = (unsigned long)ri->ret_addr;
> + if (skipped)
> + pr_warn("%ps must be blacklisted because of incorrect kretprobe order\n",
> + ri->rp->kp.addr);
> +
> + if (orig_ret_address != trampoline_address)
> + /*
> + * This is the real return address. Any other
> + * instances associated with this task are for
> + * other calls deeper on the call stack
> + */
> + break;
> + }
> +
> + kretprobe_assert(ri, orig_ret_address, trampoline_address);
> +
> + correct_ret_addr = ri->ret_addr;
Oops, this is insane code... why do we have both orig_ret_address *and* correct_ret_addr?
I'll clean this up.
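Maybe something like the following. This is just an untested sketch of the
idea (it drops the skipped-entry warnings for brevity), keeping only
correct_ret_addr in the first pass instead of tracking two variables:

	kprobe_opcode_t *correct_ret_addr = NULL;

	/* Find the real return address for the current frame */
	hlist_for_each_entry(ri, head, hlist) {
		if (ri->task != current)
			/* another task is sharing our hash bucket */
			continue;
		if (ri->fp != frame_pointer)
			/* entry pushed in the wrong order, skip it */
			continue;

		correct_ret_addr = ri->ret_addr;
		if ((unsigned long)correct_ret_addr != trampoline_address)
			/*
			 * This is the real return address. Any other
			 * instances associated with this task are for
			 * other calls deeper on the call stack.
			 */
			break;
	}

	kretprobe_assert(ri, (unsigned long)correct_ret_addr,
			 trampoline_address);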
Thanks,
--
Masami Hiramatsu <mhiramat@...nel.org>