Message-ID: <2D68B83E59A24077+20251030093644.430507-1-hongao@uniontech.com>
Date: Thu, 30 Oct 2025 17:36:44 +0800
From: hongao <hongao@...ontech.com>
To: mhiramat@...nel.org
Cc: naveen@...nel.org,
anil.s.keshavamurthy@...el.com,
davem@...emloft.net,
linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org,
hongao <hongao@...ontech.com>
Subject: [PATCH v2 1/1] kprobes: retry pending optprobe after freeing blocker

Thanks for the review.

The freeing_list cleanup now retries optimizing any sibling probe whose
optimization was deferred while this aggregator was being torn down. Track
a reopt_unblocked_probes flag in struct optimized_kprobe so that
__disarm_kprobe() can defer the retry until kprobe_optimizer() finishes
disarming.

Signed-off-by: hongao <hongao@...ontech.com>
---
Changes since v1:
- Replace `kprobe_opcode_t *pending_reopt_addr` with `bool reopt_unblocked_probes`
  in `struct optimized_kprobe`, so no address needs to be stored and the
  logic stays simpler.
- Look the sibling optimized probe up via `op->kp.addr` instead of keeping
  a separately stored address.
- Defer the re-optimization by setting `op->reopt_unblocked_probes` in
  `__disarm_kprobe()` and clearing/consuming it in `do_free_cleaned_kprobes()`,
  so the retry runs only after the worker has finished disarming (see the
  sketch below).
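
In case it helps review, below is a minimal user-space sketch of the
ordering the flag enforces. None of these names (fake_optprobe, fake_disarm,
fake_free_cleaned, fake_optimize) are kernel APIs; they are made-up
stand-ins that only model when the flag is set and when it is consumed:

/* defer_retry_sketch.c -- illustrative only, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for struct optimized_kprobe. */
struct fake_optprobe {
	const char *name;
	bool optimized;
	bool reopt_unblocked_probes;	/* mirrors the new flag */
};

static struct fake_optprobe aggr    = { "aggr",    true,  false };
static struct fake_optprobe sibling = { "sibling", false, false };

/* Stand-in for optimize_kprobe(): the sibling can be jump-optimized now. */
static void fake_optimize(struct fake_optprobe *p)
{
	p->optimized = true;
	printf("optimized %s\n", p->name);
}

/* Stand-in for __disarm_kprobe(): too early to retry, only record intent. */
static void fake_disarm(struct fake_optprobe *p, bool reopt)
{
	p->optimized = false;
	if (reopt)
		p->reopt_unblocked_probes = true;	/* defer the retry */
}

/* Stand-in for do_free_cleaned_kprobes(): aggregator fully reverted. */
static void fake_free_cleaned(struct fake_optprobe *p)
{
	if (p->reopt_unblocked_probes) {
		fake_optimize(&sibling);	/* the deferred retry */
		p->reopt_unblocked_probes = false;
	}
	printf("freed %s\n", p->name);
}

int main(void)
{
	fake_disarm(&aggr, true);	/* worker has not finished disarming */
	fake_free_cleaned(&aggr);	/* teardown done: sibling is retried */
	return 0;
}

The only point the sketch makes is that the retry happens strictly after
the aggregator's teardown, which is what moving it into
do_free_cleaned_kprobes() guarantees.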
---
 include/linux/kprobes.h |  1 +
 kernel/kprobes.c        | 28 ++++++++++++++++++++++------
 2 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 8c4f3bb24..4f49925a4 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -338,6 +338,7 @@ DEFINE_INSN_CACHE_OPS(insn);
 struct optimized_kprobe {
 	struct kprobe kp;
 	struct list_head list;	/* list for optimizing queue */
+	bool reopt_unblocked_probes;
 	struct arch_optimized_insn optinsn;
 };
 
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index da59c68df..799542dff 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -514,6 +514,7 @@ static LIST_HEAD(freeing_list);
 
 static void kprobe_optimizer(struct work_struct *work);
 static DECLARE_DELAYED_WORK(optimizing_work, kprobe_optimizer);
+static void optimize_kprobe(struct kprobe *p);
 #define OPTIMIZE_DELAY 5
 
 /*
@@ -591,6 +592,21 @@ static void do_free_cleaned_kprobes(void)
 			 */
 			continue;
 		}
+		if (op->reopt_unblocked_probes) {
+			struct kprobe *unblocked;
+
+			/*
+			 * The aggregator was holding back another probe while it sat on the
+			 * unoptimizing/freeing lists. Now that the aggregator has been fully
+			 * reverted we can safely retry the optimization of that sibling.
+			 */
+
+			unblocked = get_optimized_kprobe(op->kp.addr);
+			if (unlikely(unblocked))
+				optimize_kprobe(unblocked);
+			op->reopt_unblocked_probes = false;
+		}
+
 		free_aggr_kprobe(&op->kp);
 	}
 }
@@ -1009,13 +1025,13 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
 		_p = get_optimized_kprobe(p->addr);
 		if (unlikely(_p) && reopt)
 			optimize_kprobe(_p);
+	} else if (reopt && kprobe_aggrprobe(p)) {
+		struct optimized_kprobe *op =
+			container_of(p, struct optimized_kprobe, kp);
+
+		/* Defer the re-optimization until the worker finishes disarming. */
+		op->reopt_unblocked_probes = true;
 	}
-	/*
-	 * TODO: Since unoptimization and real disarming will be done by
-	 * the worker thread, we can not check whether another probe are
-	 * unoptimized because of this probe here. It should be re-optimized
-	 * by the worker thread.
-	 */
 }
 
 #else /* !CONFIG_OPTPROBES */
--
2.47.2