Message-ID: <20250602025906.GA67804@system.software.com>
Date: Mon, 2 Jun 2025 11:59:06 +0900
From: Byungchul Park <byungchul@...com>
To: Yeo Reum Yun <YeoReum.Yun@....com>
Cc: "kernel_team@...ynix.com" <kernel_team@...ynix.com>,
"linux-ide@...r.kernel.org" <linux-ide@...r.kernel.org>,
"kernel-team@....com" <kernel-team@....com>,
"open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
"harry.yoo@...cle.com" <harry.yoo@...cle.com>,
"yskelg@...il.com" <yskelg@...il.com>,
"her0gyugyu@...il.com" <her0gyugyu@...il.com>,
"max.byungchul.park@...il.com" <max.byungchul.park@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC DEPT v16] Question for dept.
On Fri, May 30, 2025 at 11:27:48AM +0000, Yeo Reum Yun wrote:
> Hi Byungchul,
>
> Thanks for your great work on the latest dept patch.
>
> But I have some questions about the dept log below, supplied by
> Yunseong Kim <yskelg@...il.com>
>
> ...
> [13304.604203] context A
> [13304.604209] [S] lock(&uprobe->register_rwsem:0)
> [13304.604217] [W] __wait_rcu_gp(<sched>:0)
> [13304.604226] [E] unlock(&uprobe->register_rwsem:0)
> [13304.604234]
> [13304.604239] context B
> [13304.604244] [S] lock(event_mutex:0)
> [13304.604252] [W] lock(&uprobe->register_rwsem:0)
> [13304.604261] [E] unlock(event_mutex:0)
> [13304.604269]
> [13304.604274] context C
> [13304.604279] [S] lock(&ctx->mutex:0)
> [13304.604287] [W] lock(event_mutex:0)
> [13304.604295] [E] unlock(&ctx->mutex:0)
> [13304.604303]
> [13304.604308] context D
> [13304.604313] [S] lock(&sig->exec_update_lock:0)
> [13304.604322] [W] lock(&ctx->mutex:0)
> [13304.604330] [E] unlock(&sig->exec_update_lock:0)
> [13304.604338]
> [13304.604343] context E
> [13304.604348] [S] lock(&f->f_pos_lock:0)
> [13304.604356] [W] lock(&sig->exec_update_lock:0)
> [13304.604365] [E] unlock(&f->f_pos_lock:0)
> [13304.604373]
> [13304.604378] context F
> [13304.604383] [S] (unknown)(<sched>:0)
> [13304.604391] [W] lock(&f->f_pos_lock:0)
> [13304.604399] [E] try_to_wake_up(<sched>:0)
> [13304.604408]
> [13304.604413] context G
> [13304.604418] [S] lock(btrfs_trans_num_writers:0)
> [13304.604427] [W] btrfs_commit_transaction(<sched>:0)
> [13304.604436] [E] unlock(btrfs_trans_num_writers:0)
> [13304.604445]
> [13304.604449] context H
> [13304.604455] [S] (unknown)(<sched>:0)
> [13304.604463] [W] lock(btrfs_trans_num_writers:0)
> [13304.604471] [E] try_to_wake_up(<sched>:0)
> [13304.604484] context I
> [13304.604490] [S] (unknown)(<sched>:0)
> [13304.604498] [W] synchronize_rcu_expedited_wait_once(<sched>:0)
> [13304.604507] --------------- >8 timeout ---------------
> [13304.604527] context J
> [13304.604533] [S] (unknown)(<sched>:0)
> [13304.604541] [W] synchronize_rcu_expedited(<sched>:0)
> [13304.604549] [E] try_to_wake_up(<sched>:0)
What a long cycle! Dept is working great!
However, this is a false positive coming from RCU waits that haven't
been classified properly yet. The fix is in progress by Yunseong Kim,
so we should wait for him to complete it :(
> [end of circular]
> ...
>
> 1. I wonder how context A could be printed with
> [13304.604217] [W] __wait_rcu_gp(<sched>:0)
> since the completion's dept map will be initialized with
> sdt_might_sleep_start_timeout((x)->dmap, -1L);
>
> I think the last dept_task's stage_sched_map causes this wrong print.
No. It's working as it should. Since (x)->dmap is NULL in this case,
it's supposed to print <sched>.
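Roughly, the staging path does something like the following. This is
only a sketch of the idea to show why <sched> gets printed; the helper
name stage_wait_sketch() and the dept_task() usage are my shorthand
here, only the dt->stage_* fields are the ones your diff touches:

	/*
	 * Sketch only: when a wait is staged without its own dept_map
	 * (m == NULL), the wait is attributed to the per-task scheduler
	 * map instead, which is what shows up as "<sched>:0" in the
	 * report.
	 */
	static void stage_wait_sketch(struct dept_map *m)
	{
		struct dept_task *dt = dept_task();

		if (m) {
			/* stage on the class of the map that was passed in */
			dt->stage_m = *m;
			dt->stage_real_m = m;
		} else {
			/* no map given: fall back to the <sched> class */
			dt->stage_sched_map = true;
		}
	}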
> Should this be fixed with:
>
> @@ -2713,6 +2713,7 @@ void dept_stage_wait(struct dept_map *m, struct dept_key *k,
>  	if (m) {
>  		dt->stage_m = *m;
>  		dt->stage_real_m = m;
> +		dt->stage_sched_map = false;
It should already be false since sdt_might_sleep_end() resets this value
to false. A DEPT_WARN_ON(dt->stage_sched_map) here might make more
sense.
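Something along these lines, against the same spot your diff touches
(an untested sketch, not an actual patch):

@@ -2713,6 +2713,7 @@ void dept_stage_wait(struct dept_map *m, struct dept_key *k,
 	if (m) {
+		DEPT_WARN_ON(dt->stage_sched_map);
 		dt->stage_m = *m;
 		dt->stage_real_m = m;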
>  		/*
>  		 * Ensure dt->stage_m.keys != NULL and it works with the
>
> 2. Whenever dept prints a dependency initialized with sdt_might_sleep_start_timeout(), it currently prints
> (unknown)(<sched>:0) only.
> Wouldn't it be better to print task information (pid, comm and others)?
Thanks for such valuable feedback. I will add it to the to-do list.
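For example, something like this in the report path, where p is the
task that staged the wait (a sketch of the idea only; the exact print
function and where it hooks in are not from the actual dept code):

	/*
	 * Sketch only: also report which task staged the wait instead
	 * of the bare "(unknown)(<sched>:0)".
	 */
	pr_warn("[S] (unknown)(<sched>:0) staged by pid=%d comm=%s\n",
		task_pid_nr(p), p->comm);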
Byungchul
>
> Thanks.