Message-ID: <20251231045948.77624-1-sj@kernel.org>
Date: Tue, 30 Dec 2025 20:59:46 -0800
From: SeongJae Park <sj@...nel.org>
To: JaeJoon Jung <rgbi3307@...il.com>
Cc: SeongJae Park <sj@...nel.org>,
Asier Gutierrez <gutierrez.asier@...wei-partners.com>,
akpm@...ux-foundation.org,
damon@...ts.linux.dev,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
wangkefeng.wang@...wei.com,
artem.kuzin@...wei.com,
stepanov.anatoly@...wei.com
Subject: Re: [RFC PATCH v1] mm: improve call_controls_lock
On Wed, 31 Dec 2025 11:15:00 +0900 JaeJoon Jung <rgbi3307@...il.com> wrote:
> On Tue, 30 Dec 2025 at 00:23, SeongJae Park <sj@...nel.org> wrote:
> >
> > Hello Asier,
> >
> >
> > Thank you for sending this patch!
> >
> > On Mon, 29 Dec 2025 14:55:32 +0000 Asier Gutierrez <gutierrez.asier@...wei-partners.com> wrote:
> >
> > > This is a minor patch set for a call_controls_lock synchronization improvement.
> >
> > Please wrap the description lines so they do not exceed 75 characters.
> >
> > >
> > > Spinlocks are faster than mutexes, even when the mutex takes the fast
> > > path. Hence, this patch replaces the mutex call_controls_lock with a spinlock.
> >
> > But call_controls_lock is not used in a performance-critical path.
> > Actually, most of DAMON code is not performance critical. I really appreciate
> > your patch, but I have to say I don't think this change is really needed now.
> > Please let me know if I'm missing something.
>
> Paradoxically, when it comes to locking, spin_lock is better than
> mutex_lock because "most of DAMON code is not performance critical."
>
> DAMON code only accesses the ctx belonging to the kdamond itself.
> For example:
> kdamond.0 --> ctx.0
> kdamond.1 --> ctx.1
> kdamond.2 --> ctx.2
> kdamond.# --> ctx.#
>
> There is no cross-access as shown below:
> kdamond.0 --> ctx.1
> kdamond.1 --> ctx.2
> kdamond.2 --> ctx.0
>
> Only the data belonging to each kdamond needs to be protected against
> concurrent access. Most DAMON code only needs to hold the lock briefly
> while adding to or deleting from linked lists, so spin_lock is
> effective.
I don't disagree with this.  Both spinlock and mutex work correctly for
DAMON's locking usages.
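
For reference, the usages in question are short, non-sleeping list
manipulations.  Below is a minimal sketch of the pattern (hypothetical,
simplified names; not the real DAMON code), which either lock type can
cover:

#include <linux/list.h>
#include <linux/spinlock.h>

/* Hypothetical, simplified stand-in for struct damon_ctx.  Assume the
 * fields are set up with spin_lock_init() and INIT_LIST_HEAD(). */
struct toy_ctx {
	spinlock_t lock;		/* could equally be a struct mutex */
	struct list_head controls;
};

/*
 * The whole critical section is one O(1) list insertion that never
 * sleeps, so both lock types cover it correctly.
 */
static void toy_enqueue(struct toy_ctx *ctx, struct list_head *item)
{
	spin_lock(&ctx->lock);
	list_add_tail(item, &ctx->controls);
	spin_unlock(&ctx->lock);
}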
> If you handle it with a mutex, it becomes more complicated because
> rescheduling can occur as a context switch happens inside the kernel.
Can you please elaborate on what kind of complexities you are referring
to?  Adding some examples would be nice.
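
If the complexity you mean is just that mutex_lock() can sleep and
cause a context switch while spin_lock() busy-waits, a hypothetical
sketch of that difference (toy names, not DAMON code) would be:

#include <linux/mutex.h>
#include <linux/spinlock.h>

static DEFINE_MUTEX(toy_mutex);
static DEFINE_SPINLOCK(toy_spin);

static void toy_mutex_path(void)
{
	/* may sleep; the holder can be scheduled out */
	mutex_lock(&toy_mutex);
	/* ... short critical section ... */
	mutex_unlock(&toy_mutex);
}

static void toy_spin_path(void)
{
	/* busy-waits; sleeping inside is forbidden */
	spin_lock(&toy_spin);
	/* ... short critical section ... */
	spin_unlock(&toy_spin);
}

If that is the concern, please explain how it makes the DAMON usage
more complicated in practice.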
> Moreover, since the call_controls_lock that is currently being raised
> as a problem is only taken in two places, the kdamond_call() loop and
> the damon_call() function, it is effective to handle it with a
> spin_lock as shown below.
>
> @@ -1502,14 +1501,15 @@ int damon_call(struct damon_ctx *ctx, struct damon_call_control *control)
> control->canceled = false;
> INIT_LIST_HEAD(&control->list);
>
> - mutex_lock(&ctx->call_controls_lock);
> + spin_lock(&ctx->call_controls_lock);
> + /* damon_is_running */
> if (ctx->kdamond) {
> list_add_tail(&control->list, &ctx->call_controls);
> } else {
> - mutex_unlock(&ctx->call_controls_lock);
> + spin_unlock(&ctx->call_controls_lock);
> return -EINVAL;
> }
> - mutex_unlock(&ctx->call_controls_lock);
> + spin_unlock(&ctx->call_controls_lock);
>
> if (control->repeat)
> return 0;
Are you saying the above diff can fix the damon_call() use-after-free bug [1]?
Can you please elaborate on why you think so?
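
For context on why I'm asking: my understanding is that the lock type
alone cannot fix a lifetime problem.  Whichever lock is used, the
object can still be freed between the unlock and a later dereference.
A hypothetical sketch of such a window (toy names; not the actual code,
and not necessarily the exact bug in [1]):

#include <linux/list.h>
#include <linux/spinlock.h>

struct toy_control {
	struct list_head list;
};

static DEFINE_SPINLOCK(toy_lock);
static LIST_HEAD(toy_controls);

static int toy_caller(void)
{
	struct toy_control ctl;	/* stack-allocated by the caller */

	spin_lock(&toy_lock);
	list_add_tail(&ctl.list, &toy_controls);
	spin_unlock(&toy_lock);
	/*
	 * Once this function returns, 'ctl' is gone, but the list
	 * still references it.  A later list walk would be a
	 * use-after-free, regardless of the lock type.
	 */
	return 0;
}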
[1] https://lore.kernel.org/20251231012315.75835-1-sj@kernel.org
Thanks,
SJ
[...]