Message-ID: <20230215115430.236046-2-yangjihong1@huawei.com>
Date: Wed, 15 Feb 2023 19:54:28 +0800
From: Yang Jihong <yangjihong1@...wei.com>
To: <tglx@...utronix.de>, <mingo@...hat.com>, <bp@...en8.de>,
<dave.hansen@...ux.intel.com>, <x86@...nel.org>, <hpa@...or.com>,
<naveen.n.rao@...ux.ibm.com>, <anil.s.keshavamurthy@...el.com>,
<davem@...emloft.net>, <mhiramat@...nel.org>, <ast@...nel.org>,
<peterz@...radead.org>, <linux-kernel@...r.kernel.org>,
<linux-trace-kernel@...r.kernel.org>
CC: <yangjihong1@...wei.com>
Subject: [PATCH 1/3] kprobes: Fix probes not being correctly removed when forcibly unoptimized
When unoptimize_kprobe() forcibly unoptimizes a kprobe, the kprobe is queued
directly on freeing_list. However, do_unoptimize_kprobes() returns early when
unoptimizing_list is empty and never processes freeing_list, so
do_free_cleaned_kprobes() later reclaims a kprobe that is not yet unused,
which may trigger a WARN or a use-after-free (UAF).
The specific scenario is as follows:

Thread 1:
  arm_kprobe(p)
    mutex_lock(&kprobe_mutex)
    __arm_kprobe(p)
      _p = get_optimized_kprobe(p->addr)
      if (unlikely(_p))
        unoptimize_kprobe(_p, true)  // now _p is queued on freeing_list
    mutex_unlock(&kprobe_mutex)

Thread 2:
  kprobe_optimizer
    mutex_lock(&kprobe_mutex)
    do_unoptimize_kprobes
      if (list_empty(&unoptimizing_list))
        return;  // returns directly here and does not process freeing_list
    ...
    do_free_cleaned_kprobes
      for each op in freeing_list:
        WARN_ON_ONCE(!kprobe_unused(&op->kp))  // WARN is triggered here
        free_aggr_kprobe(&op->kp)  // op->kp is freed directly; a later
                                   // access via the hash list is a UAF
    mutex_unlock(&kprobe_mutex)
The freeing_list must be processed in do_unoptimize_kprobes() regardless of
whether unoptimizing_list is empty.
Signed-off-by: Yang Jihong <yangjihong1@...wei.com>
---
kernel/kprobes.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 1c18ecf9f98b..0730e595f4c1 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -556,10 +556,9 @@ static void do_unoptimize_kprobes(void)
 	lockdep_assert_cpus_held();
 
 	/* Unoptimization must be done anytime */
-	if (list_empty(&unoptimizing_list))
-		return;
+	if (!list_empty(&unoptimizing_list))
+		arch_unoptimize_kprobes(&unoptimizing_list, &freeing_list);
 
-	arch_unoptimize_kprobes(&unoptimizing_list, &freeing_list);
 	/* Loop on 'freeing_list' for disarming */
 	list_for_each_entry_safe(op, tmp, &freeing_list, list) {
 		/* Switching from detour code to origin */
--
2.30.GIT