Message-ID: <Ye7BGfOwNQbPQ37M@kroah.com>
Date: Mon, 24 Jan 2022 16:09:13 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: Masami Hiramatsu <mhiramat@...nel.org>
Cc: stable@...r.kernel.org, linux-kernel@...r.kernel.org,
Ingo Molnar <mingo@...nel.org>, Sasha Levin <sashal@...nel.org>
Subject: Re: [PATCH v2 5.4.y] Revert "ia64: kprobes: Use generic kretprobe
trampoline handler"
On Sat, Jan 15, 2022 at 09:59:16PM +0900, Masami Hiramatsu wrote:
> This reverts commit 77fa5e15c933a1ec812de61ad709c00aa51e96ae.
>
> The upstream commit e792ff804f49720ce003b3e4c618b5d996256a18
> depends on the generic kretprobe trampoline handler introduced
> by commit 66ada2ccae4e ("kprobes: Add generic kretprobe
> trampoline handler"), but that series was not backported to the
> stable kernel because it is not a bugfix series.
> Revert this commit to fix the resulting build error.
>
> NOTE: I keep commit a7fe2378454c ("ia64: kprobes: Fix to pass
> correct trampoline address to the handler") in the tree; without
> the reverted commit it amounts to a cleanup, but it is still
> better to use the dereference_function_descriptor() macro than
> to access the descriptor's field directly.
>
> Fixes: 77fa5e15c933 ("ia64: kprobes: Use generic kretprobe trampoline handler")
> Reported-by: kernel test robot <lkp@...el.com>
> Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
> ---
> Changes in v2:
> - fix the lack of type casting for dereference_function_descriptor().
> ---
> arch/ia64/kernel/kprobes.c | 78 ++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 75 insertions(+), 3 deletions(-)
>
> diff --git a/arch/ia64/kernel/kprobes.c b/arch/ia64/kernel/kprobes.c
> index 8a223d0e4918..fa10d51f6217 100644
> --- a/arch/ia64/kernel/kprobes.c
> +++ b/arch/ia64/kernel/kprobes.c
> @@ -396,10 +396,83 @@ static void kretprobe_trampoline(void)
> {
> }
>
> +/*
> + * At this point the target function has been tricked into
> + * returning into our trampoline. Look up the associated instance
> + * and then:
> + * - call the handler function
> + * - cleanup by marking the instance as unused
> + * - long jump back to the original return address
> + */
> int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
> {
> - regs->cr_iip = __kretprobe_trampoline_handler(regs,
> - dereference_function_descriptor(kretprobe_trampoline), NULL);
> + struct kretprobe_instance *ri = NULL;
> + struct hlist_head *head, empty_rp;
> + struct hlist_node *tmp;
> + unsigned long flags, orig_ret_address = 0;
> + unsigned long trampoline_address =
> + (unsigned long)dereference_function_descriptor(kretprobe_trampoline);
> +
> + INIT_HLIST_HEAD(&empty_rp);
> + kretprobe_hash_lock(current, &head, &flags);
> +
> + /*
> + * It is possible to have multiple instances associated with a given
> + * task either because multiple functions in the call path
> + * have a return probe installed on them, and/or more than one
> + * return probe was registered for a target function.
> + *
> + * We can handle this because:
> + * - instances are always inserted at the head of the list
> + * - when multiple return probes are registered for the same
> + * function, the first instance's ret_addr will point to the
> + * real return address, and all the rest will point to
> + * kretprobe_trampoline
> + */
> + hlist_for_each_entry_safe(ri, tmp, head, hlist) {
> + if (ri->task != current)
> + /* another task is sharing our hash bucket */
> + continue;
> +
> + orig_ret_address = (unsigned long)ri->ret_addr;
> + if (orig_ret_address != trampoline_address)
> + /*
> + * This is the real return address. Any other
> + * instances associated with this task are for
> + * other calls deeper on the call stack
> + */
> + break;
> + }
> +
> + regs->cr_iip = orig_ret_address;
> +
> + hlist_for_each_entry_safe(ri, tmp, head, hlist) {
> + if (ri->task != current)
> + /* another task is sharing our hash bucket */
> + continue;
> +
> + if (ri->rp && ri->rp->handler)
> + ri->rp->handler(ri, regs);
> +
> + orig_ret_address = (unsigned long)ri->ret_addr;
> + recycle_rp_inst(ri, &empty_rp);
> +
> + if (orig_ret_address != trampoline_address)
> + /*
> + * This is the real return address. Any other
> + * instances associated with this task are for
> + * other calls deeper on the call stack
> + */
> + break;
> + }
> + kretprobe_assert(ri, orig_ret_address, trampoline_address);
> +
> + kretprobe_hash_unlock(current, &flags);
> +
> + hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
> + hlist_del(&ri->hlist);
> + kfree(ri);
> + }
> /*
> * By returning a non-zero value, we are telling
> * kprobe_handler() that we don't want the post_handler
> @@ -412,7 +485,6 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
> struct pt_regs *regs)
> {
> ri->ret_addr = (kprobe_opcode_t *)regs->b0;
> - ri->fp = NULL;
>
> /* Replace the return addr with trampoline addr */
> regs->b0 = (unsigned long)dereference_function_descriptor(kretprobe_trampoline);
>
Now queued up, thanks.
greg k-h