Message-ID: <6833C4F1.3030300@huaweicloud.com>
Date: Mon, 26 May 2025 09:33:37 +0800
From: yebin <yebin@...weicloud.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: mhiramat@...nel.org, mathieu.desnoyers@...icios.com,
mark.rutland@....com, linux-trace-kernel@...r.kernel.org,
linux-kernel@...r.kernel.org, yebin10@...wei.com
Subject: Re: [PATCH 1/2] ftrace: fix UAF when lookup kallsym after ftrace
disabled
On 2025/5/24 1:54, Steven Rostedt wrote:
> On Fri, 23 May 2025 16:39:44 +0800
> Ye Bin <yebin@...weicloud.com> wrote:
>
>> Above issue may happens as follow:
>> (1) Add kprobe trace point;
>> (2) insmod test.ko;
>> (3) Trigger ftrace disabled;
>
> This is the bug. How was ftrace_disabled triggered? That should never
> happen. Was test.ko buggy?
>
Yes. The following warning was reported during concurrent registration
of register_kprobe() and a live patch, which set ftrace_disabled:
WARNING: CPU: 56 PID: 2769 at kernel/trace/ftrace.c:2612
ftrace_modify_all_code+0x116/0x140
>> (4) rmmod test.ko;
>> (5) cat /proc/kallsyms; --> Will trigger UAF as test.ko already removed;
>> ftrace_mod_get_kallsym()
>> ...
>> strscpy(module_name, mod_map->mod->name, MODULE_NAME_LEN);
>> ...
>>
>> Since ftrace_release_mod() returns early when 'ftrace_disabled' is true,
>> 'mod_map' remains on ftrace_mod_maps and never gets released. This also
>> causes residual resources to accumulate. To solve the above issue,
>> unconditionally clean up 'mod_map'.
>>
>> Fixes: aba4b5c22cba ("ftrace: Save module init functions kallsyms symbols for tracing")
>
> This is *not* a fix. ftrace_disabled gets set when a bug is triggered. If
> this prevents ftrace_disabled from getting set, then it would be a fix. But
> if something else happens when ftrace_disabled is set, it just fixes a
> symptom and not the bug itself.
>
There are multiple causes that can trigger ftrace_disabled. I agree that
aba4b5c22cba itself is not faulty. However, once ftrace_disabled is
triggered, this code path causes the problems described above. Triggering
ftrace_disabled is beyond our control; it depends on the user. What we
can do is make sure no additional derivative problems occur even when it
is set.
>
>> Signed-off-by: Ye Bin <yebin10@...wei.com>
>> ---
>> kernel/trace/ftrace.c | 3 ---
>> 1 file changed, 3 deletions(-)
>>
>> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
>> index a3d4dfad0cbc..ff5d9d73a4a7 100644
>> --- a/kernel/trace/ftrace.c
>> +++ b/kernel/trace/ftrace.c
>> @@ -7438,9 +7438,6 @@ void ftrace_release_mod(struct module *mod)
>>
>> mutex_lock(&ftrace_lock);
>>
>> - if (ftrace_disabled)
>> - goto out_unlock;
>> -
>
> Here you delete the check, and the next patch you have:
>
> + if (ftrace_disabled || (mod && !mod->num_ftrace_callsites)) {
> + mutex_unlock(&ftrace_lock);
> + return;
> + }
> +
>
The second patch adds a check when initializing 'mod_map' in
ftrace_free_mem(). The first patch removes the check when
ftrace_release_mod() releases 'mod_map'. The logic modified by the two
patches is isolated.
> Why the two patches where the second patch just adds back the check and
> then adds some more stuff around it. This should be a single patch.
>
> Also, why not just keep the goto unlock, that has:
>
The ftrace_free_mem() function itself looks a little strange. Its name
makes it easy to mistake for a release function, but it is actually part
of initialization. My two patches do not modify the same function.
> out_unlock:
> mutex_unlock(&ftrace_lock);
>
> /* Need to synchronize with ftrace_location_range() */
> if (tmp_page)
> synchronize_rcu();
> for (pg = tmp_page; pg; pg = tmp_page) {
>
> /* Needs to be called outside of ftrace_lock */
> clear_mod_from_hashes(pg);
>
> if (pg->records) {
> free_pages((unsigned long)pg->records, pg->order);
> ftrace_number_of_pages -= 1 << pg->order;
> }
> tmp_page = pg->next;
> kfree(pg);
> ftrace_number_of_groups--;
> }
> }
>
> And tmp_page is set to NULL before that jump, so the if and for loop will
> both be nops.
>
> Why all this extra churn?
>
> -- Steve
>
>
>> list_for_each_entry_safe(mod_map, n, &ftrace_mod_maps, list) {
>> if (mod_map->mod == mod) {
>> list_del_rcu(&mod_map->list);
>