Message-Id: <20260105135800.1f9f05ab5635e2ab85a2f2bd@kernel.org>
Date: Mon, 5 Jan 2026 13:58:00 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: hongao <hongao@...ontech.com>
Cc: naveen@...nel.org, anil.s.keshavamurthy@...el.com, davem@...emloft.net,
 linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/1] kprobes: retry pending optprobe after freeing
 blocker

Hi Hongao,

Thanks for updating. After a detailed review, I think we don't need a new
boolean flag for this: the queued unused probe (which is eventually handled
by do_free_cleaned_kprobes()) is always disarmed. Thus, we only need to
check for and re-optimize sibling probes in do_free_cleaned_kprobes()
itself.
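
If I read the suggestion right, the cleanup loop would end up looking
roughly like this, without the reopt_unblocked_probes flag (an untested
sketch only, based on the hunks quoted below; the surrounding lines are
paraphrased from the existing function):

```c
static void do_free_cleaned_kprobes(void)
{
	struct optimized_kprobe *op, *tmp;

	list_for_each_entry_safe(op, tmp, &freeing_list, list) {
		struct kprobe *unblocked;

		list_del_init(&op->list);
		if (WARN_ON_ONCE(!kprobe_unused(&op->kp)))
			/* Cannot free this probe; skip it as before. */
			continue;

		/*
		 * Every probe reaching this point is already disarmed,
		 * so we can unconditionally retry optimizing a sibling
		 * probe registered at the same address, with no need
		 * for a per-probe flag set in __disarm_kprobe().
		 */
		unblocked = get_optimized_kprobe(op->kp.addr);
		if (unblocked)
			optimize_kprobe(unblocked);

		free_aggr_kprobe(&op->kp);
	}
}
```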

On Wed, 10 Dec 2025 11:33:21 +0800
hongao <hongao@...ontech.com> wrote:

> The freeing_list cleanup now retries optimizing any sibling probe that was
> deferred while this aggregator was being torn down.  Track the pending
> address in struct optimized_kprobe so __disarm_kprobe() can defer the
> retry until kprobe_optimizer() finishes disarming.
> 
> Signed-off-by: hongao <hongao@...ontech.com>
> ---
> Changes since v1:
> - Replace `kprobe_opcode_t *pending_reopt_addr` with `bool reopt_unblocked_probes`
>   in `struct optimized_kprobe` to avoid storing an address and simplify logic.
> - Use `op->kp.addr` when looking up the sibling optimized probe instead of
>   keeping a separate stored address.
> - Defer re-optimization by setting/clearing `op->reopt_unblocked_probes` in
>   `__disarm_kprobe()` / consuming it in `do_free_cleaned_kprobes()` so the
>   retry runs after the worker finishes disarming.
> - Link to v1: https://lore.kernel.org/all/2B0BC73E9D190B7B+20251027130535.2296913-1-hongao@uniontech.com/
> ---
>  include/linux/kprobes.h |  1 +
>  kernel/kprobes.c        | 28 ++++++++++++++++++++++------
>  2 files changed, 23 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
> index 8c4f3bb24..4f49925a4 100644
> --- a/include/linux/kprobes.h
> +++ b/include/linux/kprobes.h
> @@ -338,6 +338,7 @@ DEFINE_INSN_CACHE_OPS(insn);
>  struct optimized_kprobe {
>  	struct kprobe kp;
>  	struct list_head list;	/* list for optimizing queue */
> +	bool reopt_unblocked_probes;
>  	struct arch_optimized_insn optinsn;
>  };

This is not needed. 

>  
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index da59c68df..799542dff 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -514,6 +514,7 @@ static LIST_HEAD(freeing_list);
>  
>  static void kprobe_optimizer(struct work_struct *work);
>  static DECLARE_DELAYED_WORK(optimizing_work, kprobe_optimizer);
> +static void optimize_kprobe(struct kprobe *p);
>  #define OPTIMIZE_DELAY 5
>  
>  /*
> @@ -591,6 +592,21 @@ static void do_free_cleaned_kprobes(void)
>  			 */
>  			continue;
>  		}
> +		if (op->reopt_unblocked_probes) {
> +			struct kprobe *unblocked;
> +
> +			/*
> +			 * The aggregator was holding back another probe while it sat on the
> +			 * unoptimizing/freeing lists.  Now that the aggregator has been fully
> +			 * reverted we can safely retry the optimization of that sibling.
> +			 */
> +
> +			unblocked = get_optimized_kprobe(op->kp.addr);
> +			if (unlikely(unblocked))
> +				optimize_kprobe(unblocked);
> +			op->reopt_unblocked_probes = false;
> +		}

This is what we need (but you do not need to check/update
reopt_unblocked_probes).

> +
>  		free_aggr_kprobe(&op->kp);
>  	}
>  }



> @@ -1009,13 +1025,13 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
>  		_p = get_optimized_kprobe(p->addr);
>  		if (unlikely(_p) && reopt)
>  			optimize_kprobe(_p);
> +	} else if (reopt && kprobe_aggrprobe(p)) {
> +		struct optimized_kprobe *op =
> +			container_of(p, struct optimized_kprobe, kp);
> +
> +		/* Defer the re-optimization until the worker finishes disarming. */
> +		op->reopt_unblocked_probes = true;

Do not need this.

>  	}
> -	/*
> -	 * TODO: Since unoptimization and real disarming will be done by
> -	 * the worker thread, we can not check whether another probe are
> -	 * unoptimized because of this probe here. It should be re-optimized
> -	 * by the worker thread.
> -	 */

Only remove this comment.

Thank you,

>  }
>  
>  #else /* !CONFIG_OPTPROBES */
> -- 
> 2.47.2
> 


-- 
Masami Hiramatsu (Google) <mhiramat@...nel.org>
