Message-Id: <20230216000849.f0e332ebf35150cb5d5c4a29@kernel.org>
Date: Thu, 16 Feb 2023 00:08:49 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: Yang Jihong <yangjihong1@...wei.com>
Cc: <tglx@...utronix.de>, <mingo@...hat.com>, <bp@...en8.de>,
<dave.hansen@...ux.intel.com>, <x86@...nel.org>, <hpa@...or.com>,
<naveen.n.rao@...ux.ibm.com>, <anil.s.keshavamurthy@...el.com>,
<davem@...emloft.net>, <ast@...nel.org>, <peterz@...radead.org>,
<linux-kernel@...r.kernel.org>,
<linux-trace-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/3] x86/kprobes: Fix __recover_optprobed_insn check
optimizing logic
On Wed, 15 Feb 2023 19:54:29 +0800
Yang Jihong <yangjihong1@...wei.com> wrote:
> Since the following commit:
>
> commit f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
>
> modified the update timing of KPROBE_FLAG_OPTIMIZED, an optimized_kprobe
> may be in the optimizing or unoptimizing state when op.kp->flags
> has KPROBE_FLAG_OPTIMIZED and op->list is not empty.
>
> The __recover_optprobed_insn check logic is incorrect: a kprobe in the
> unoptimizing state may be incorrectly treated as being under optimizing.
> As a result, incorrect instructions are copied.
Ah, good catch!
>
> The optprobe_queued_unopt function needs to be exported so that it can
> be invoked from arch-specific code.
>
> Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
Cc: stable@...r.kernel.org
> Signed-off-by: Yang Jihong <yangjihong1@...wei.com>
> ---
> arch/x86/kernel/kprobes/opt.c | 4 ++--
> include/linux/kprobes.h | 1 +
> kernel/kprobes.c | 2 +-
> 3 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
> index e57e07b0edb6..3718d6863555 100644
> --- a/arch/x86/kernel/kprobes/opt.c
> +++ b/arch/x86/kernel/kprobes/opt.c
> @@ -46,8 +46,8 @@ unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsigned long addr)
> /* This function only handles jump-optimized kprobe */
> if (kp && kprobe_optimized(kp)) {
> op = container_of(kp, struct optimized_kprobe, kp);
> - /* If op->list is not empty, op is under optimizing */
> - if (list_empty(&op->list))
> + /* If op is [un]optimized or under unoptimizing */
Hmm, this comment is a bit confusing. If the kprobe is unoptimized, kprobe_optimized() returns false,
so the comment should be /* If op is optimized or under unoptimizing */.
Thank you,
> + if (list_empty(&op->list) || optprobe_queued_unopt(op))
> goto found;
> }
> }
> diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
> index a0b92be98984..ab39285f71a6 100644
> --- a/include/linux/kprobes.h
> +++ b/include/linux/kprobes.h
> @@ -378,6 +378,7 @@ extern void opt_pre_handler(struct kprobe *p, struct pt_regs *regs);
> DEFINE_INSN_CACHE_OPS(optinsn);
>
> extern void wait_for_kprobe_optimizer(void);
> +bool optprobe_queued_unopt(struct optimized_kprobe *op);
> #else /* !CONFIG_OPTPROBES */
> static inline void wait_for_kprobe_optimizer(void) { }
> #endif /* CONFIG_OPTPROBES */
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 0730e595f4c1..bf60eb26c873 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -661,7 +661,7 @@ void wait_for_kprobe_optimizer(void)
> mutex_unlock(&kprobe_mutex);
> }
>
> -static bool optprobe_queued_unopt(struct optimized_kprobe *op)
> +bool optprobe_queued_unopt(struct optimized_kprobe *op)
> {
> struct optimized_kprobe *_op;
>
> --
> 2.30.GIT
>
--
Masami Hiramatsu (Google) <mhiramat@...nel.org>