Message-ID:
<GV1PR08MB10521BCB90DD275E324622DA0FB61A@GV1PR08MB10521.eurprd08.prod.outlook.com>
Date: Fri, 30 May 2025 11:27:48 +0000
From: Yeo Reum Yun <YeoReum.Yun@....com>
To: Byungchul Park <byungchul@...com>
CC: "kernel_team@...ynix.com" <kernel_team@...ynix.com>,
"linux-ide@...r.kernel.org" <linux-ide@...r.kernel.org>,
"kernel-team@....com" <kernel-team@....com>, "open list:MEMORY MANAGEMENT"
<linux-mm@...ck.org>, "harry.yoo@...cle.com" <harry.yoo@...cle.com>,
"yskelg@...il.com" <yskelg@...il.com>, "her0gyugyu@...il.com"
<her0gyugyu@...il.com>, "max.byungchul.park@...il.com"
<max.byungchul.park@...il.com>, Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: [RFC DEPT v16] Questions about dept.
Hi Byungchul,
Thanks for your great work on the latest dept patch.
But I have some questions about the dept log below, supplied by
Yunseong Kim <yskelg@...il.com>
...
[13304.604203] context A
[13304.604209] [S] lock(&uprobe->register_rwsem:0)
[13304.604217] [W] __wait_rcu_gp(<sched>:0)
[13304.604226] [E] unlock(&uprobe->register_rwsem:0)
[13304.604234]
[13304.604239] context B
[13304.604244] [S] lock(event_mutex:0)
[13304.604252] [W] lock(&uprobe->register_rwsem:0)
[13304.604261] [E] unlock(event_mutex:0)
[13304.604269]
[13304.604274] context C
[13304.604279] [S] lock(&ctx->mutex:0)
[13304.604287] [W] lock(event_mutex:0)
[13304.604295] [E] unlock(&ctx->mutex:0)
[13304.604303]
[13304.604308] context D
[13304.604313] [S] lock(&sig->exec_update_lock:0)
[13304.604322] [W] lock(&ctx->mutex:0)
[13304.604330] [E] unlock(&sig->exec_update_lock:0)
[13304.604338]
[13304.604343] context E
[13304.604348] [S] lock(&f->f_pos_lock:0)
[13304.604356] [W] lock(&sig->exec_update_lock:0)
[13304.604365] [E] unlock(&f->f_pos_lock:0)
[13304.604373]
[13304.604378] context F
[13304.604383] [S] (unknown)(<sched>:0)
[13304.604391] [W] lock(&f->f_pos_lock:0)
[13304.604399] [E] try_to_wake_up(<sched>:0)
[13304.604408]
[13304.604413] context G
[13304.604418] [S] lock(btrfs_trans_num_writers:0)
[13304.604427] [W] btrfs_commit_transaction(<sched>:0)
[13304.604436] [E] unlock(btrfs_trans_num_writers:0)
[13304.604445]
[13304.604449] context H
[13304.604455] [S] (unknown)(<sched>:0)
[13304.604463] [W] lock(btrfs_trans_num_writers:0)
[13304.604471] [E] try_to_wake_up(<sched>:0)
[13304.604484] context I
[13304.604490] [S] (unknown)(<sched>:0)
[13304.604498] [W] synchronize_rcu_expedited_wait_once(<sched>:0)
[13304.604507] --------------- >8 timeout ---------------
[13304.604527] context J
[13304.604533] [S] (unknown)(<sched>:0)
[13304.604541] [W] synchronize_rcu_expedited(<sched>:0)
[13304.604549] [E] try_to_wake_up(<sched>:0)
[end of circular]
...
1. I wonder how context A could be printed with
[13304.604217] [W] __wait_rcu_gp(<sched>:0)
since the completion's dept map will be initialized with
sdt_might_sleep_start_timeout((x)->dmap, -1L);
I think the last dept_task's stale stage_sched_map causes this incorrect print.
Should this be fixed with:
@@ -2713,6 +2713,7 @@ void dept_stage_wait(struct dept_map *m, struct dept_key *k,
if (m) {
dt->stage_m = *m;
dt->stage_real_m = m;
+ dt->stage_sched_map = false;
/*
* Ensure dt->stage_m.keys != NULL and it works with the
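To illustrate why I think the flag needs clearing, here is a sketch of
how I read the two staging paths (not verbatim v16 code; apart from the
fields quoted in the hunk above, the else branch is my assumption):

void dept_stage_wait(struct dept_map *m, struct dept_key *k,
		     unsigned long ip, const char *w_fn)
{
	struct dept_task *dt = dept_task();

	if (m) {
		/* Staging a real map. */
		dt->stage_m = *m;
		dt->stage_real_m = m;
		/*
		 * Proposed: clear the flag left over from an earlier
		 * sched-map staging so the real map, not <sched>, is
		 * used when the wait is printed.
		 */
		dt->stage_sched_map = false;
	} else {
		/*
		 * Staging the sched map. If the real-map path above
		 * never clears this flag, it stays true, and the next
		 * wait staged with a real map is still printed as
		 * <sched>, which is what context A seems to show.
		 */
		dt->stage_sched_map = true;
	}
}

With the one-liner above, a later real-map staging would no longer
inherit the sched-map flag left behind by a previous wait.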
2. Whenever a dependency initialized with sdt_might_sleep_start_timeout() is printed, currently only
(unknown)(<sched>:0) is shown.
Would it be better to also print task information (pid, comm, and so on)?
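For example, something along these lines is what I had in mind (purely
hypothetical: the helper name and the pid/comm/ip fields on the wait
record are my invention, assuming dept saved them when the wait was
staged):

	/*
	 * Hypothetical sketch: if dept saved the sleeping task's pid
	 * and comm at staging time, the report could print them in
	 * place of "(unknown)(<sched>:0)".
	 */
	static void print_wait_task(struct dept_wait *w)
	{
		pr_warn("[W] <sched> by task %d (%s) at %pS\n",
			w->pid, w->comm, (void *)w->ip);
	}

Even just pid and comm would make it much easier to match a <sched>
wait back to the task that was actually sleeping.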
Thanks.