Message-ID: <CAHOvCC6F5zLnBF=v7G5k1WdDZZmkBAK94ixzLiPF0W53wdtyeA@mail.gmail.com>
Date: Wed, 31 Dec 2025 14:27:54 +0900
From: JaeJoon Jung <rgbi3307@...il.com>
To: SeongJae Park <sj@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, "# 6 . 14 . x" <stable@...r.kernel.org>,
damon@...ts.linux.dev, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm/damon/core: remove call_control in inactive contexts
On Wed, 31 Dec 2025 at 10:25, SeongJae Park <sj@...nel.org> wrote:
>
> On Mon, 29 Dec 2025 19:45:14 -0800 SeongJae Park <sj@...nel.org> wrote:
>
> > On Mon, 29 Dec 2025 18:41:28 -0800 SeongJae Park <sj@...nel.org> wrote:
> >
> > > On Mon, 29 Dec 2025 17:45:30 -0800 SeongJae Park <sj@...nel.org> wrote:
> > >
> > > > On Sun, 28 Dec 2025 10:31:01 -0800 SeongJae Park <sj@...nel.org> wrote:
> > > >
> > [...]
> > > > I will send a new version of this fix soon.
> > >
> > > So far, I got two fixup ideas.
> > >
> > > The first one is keeping the current code as is, and additionally modifying
> > > kdamond_call() to protect all call_control object accesses under
> > > ctx->call_controls_lock protection.
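
For reference, here is how I read the first idea, as an untested sketch
only: do the ->fn() invocation while still holding call_controls_lock, so
that every access to the control objects is serialized by that one lock.
The repeat/cancel handling below is only an approximation of the real
kdamond_call() flow.

static void kdamond_call(struct damon_ctx *ctx, bool cancel)
{
        struct damon_call_control *control, *next;

        mutex_lock(&ctx->call_controls_lock);
        list_for_each_entry_safe(control, next, &ctx->call_controls, list) {
                if (cancel) {
                        control->canceled = true;
                } else {
                        control->return_code = control->fn(control->data);
                        if (control->repeat)
                                continue;
                }
                list_del(&control->list);
                complete(&control->completion);
        }
        mutex_unlock(&ctx->call_controls_lock);
}

With this, a callback that takes call_controls_lock itself would deadlock,
which is exactly the implicit rule you mention below.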
> > >
> > > The second one is reverting this patch, and doing the DAMON running status
> > > check before adding the damon_call_control object, but releasing the
> > > kdamond_lock after the object insertion is done.
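
And the second idea, again only as an untested sketch: do the running
check and the list insertion without dropping kdamond_lock in between, so
kdamond cannot exit between the check and the insertion. kdamond_lock
then becomes the outer lock of call_controls_lock on this path. The tail
of the function is abbreviated.

int damon_call(struct damon_ctx *ctx, struct damon_call_control *control)
{
        init_completion(&control->completion);
        control->canceled = false;
        INIT_LIST_HEAD(&control->list);

        mutex_lock(&ctx->kdamond_lock);
        if (!ctx->kdamond) {
                mutex_unlock(&ctx->kdamond_lock);
                return -EINVAL;
        }
        mutex_lock(&ctx->call_controls_lock);
        list_add_tail(&control->list, &ctx->call_controls);
        mutex_unlock(&ctx->call_controls_lock);
        mutex_unlock(&ctx->kdamond_lock);

        if (control->repeat)
                return 0;
        wait_for_completion(&control->completion);
        return 0;       /* the real code may also inspect ->canceled here */
}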
> > >
> > > I'm in favor of the second one at the moment, as it seems simpler.
> >
> > I don't really like either of those approaches, because both implicitly add
> > locking rules. If the first approach is taken, damon_call() callers should
> > be aware that they must not register callback functions that can hold
> > call_controls_lock. If the second approach is taken, we should avoid
> > holding kdamond_lock while holding the damon_call_control lock. The second
> > implicit rule seems easier to keep, to me, but I want to avoid that if
> > possible.
> >
> > The third idea I just got is, keeping this patch as is, and moving the final
> > kdamond_call() invocation to be made _before_ the ctx->kdamond reset. That
> > removes the race condition between the final kdamond_call() and
> > damon_call_handle_inactive_ctx(), without introducing new locking rules.
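
The third idea would look roughly like the fragment below, an untested
sketch of only the kdamond exit path (not the actual v2 patch): drain and
cancel the pending controls before ctx->kdamond is reset, so a concurrent
damon_call() either sees a live kdamond or finds the list already drained.

static int kdamond_fn(void *data)
{
        struct damon_ctx *ctx = data;

        /* ... the existing monitoring loop and cleanup run here ... */

        /* cancel still-pending damon_call() requests first */
        kdamond_call(ctx, true);

        /* only then let others observe that this kdamond is gone */
        mutex_lock(&ctx->kdamond_lock);
        ctx->kdamond = NULL;
        mutex_unlock(&ctx->kdamond_lock);
        return 0;
}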
>
> I just posted the v2 [1] with the third idea.
>
> [1] https://lore.kernel.org/20251231012315.75835-1-sj@kernel.org

I generally agree with what you've said so far. However, it's inefficient
to keep executing damon_call_handle_inactive_ctx() while kdamond is "off",
and there is no need to add a new damon_call_handle_inactive_ctx()
function at all. As shown in the diff below, it's better to do the
list_add_tail() only while kdamond is "on" (ctx->kdamond is set), and
otherwise fall back to the existing code, finishing with
kdamond_call(ctx, true) when kdamond is "off".
+static void kdamond_call(struct damon_ctx *ctx, bool cancel);
+
 /**
  * damon_call() - Invoke a given function on DAMON worker thread (kdamond).
  * @ctx:        DAMON context to call the function for.
@@ -1496,14 +1475,17 @@ int damon_call(struct damon_ctx *ctx, struct damon_call_control *control)
         control->canceled = false;
         INIT_LIST_HEAD(&control->list);
-        if (damon_is_running(ctx)) {
-                mutex_lock(&ctx->call_controls_lock);
+        mutex_lock(&ctx->call_controls_lock);
+        if (ctx->kdamond) {
                 list_add_tail(&control->list, &ctx->call_controls);
-                mutex_unlock(&ctx->call_controls_lock);
         } else {
-                /* return damon_call_handle_inactive_ctx(ctx, control); */
+                mutex_unlock(&ctx->call_controls_lock);
+                if (!list_empty_careful(&ctx->call_controls))
+                        kdamond_call(ctx, true);
                 return -EINVAL;
         }
+        mutex_unlock(&ctx->call_controls_lock);
+
         if (control->repeat)
                 return 0;
         wait_for_completion(&control->completion);

Thanks,
JaeJoon
>
>
> Thanks,
> SJ
>
> [...]