Message-ID: <20250224233808.5jubn7icui5r2ufz@airbuntu>
Date: Mon, 24 Feb 2025 23:38:08 +0000
From: Qais Yousef <qyousef@...alina.io>
To: Xuewen Yan <xuewen.yan94@...il.com>
Cc: Xuewen Yan <xuewen.yan@...soc.com>, mingo@...hat.com,
peterz@...radead.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
vschneid@...hat.com, ke.wang@...soc.com, di.shen@...soc.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] sched/uclamp: Add uclamp_is_used() check before
enable it
On 02/24/25 09:55, Xuewen Yan wrote:
> Hi Qais,
>
> On Sun, Feb 23, 2025 at 7:36 AM Qais Yousef <qyousef@...alina.io> wrote:
> >
> > On 02/13/25 17:15, Xuewen Yan wrote:
> > > Because static_branch_enable() takes cpus_read_lock(), and because
> > > userspace may set the uclamp value of tasks frequently,
> > > uclamp_validate() can end up calling static_branch_enable()
> > > frequently as well. Add an uclamp_is_used() check so that
> > > cpus_read_lock() is not taken repeatedly.
> >
> > FWIW, the original patch did such a check, but it was taken out after
> > review comments.
> >
> > Is something like below completely broken instead? I think uclamp usage isn't
> > unique but haven't really audited the code to see if there are similar users.
> >
> > I think it is a valid pattern to allow, and the expectation was that
> > there shouldn't be any side effect from calling this repeatedly.
> >
> > Good catch by the way.
> >
> > --->8---
> >
> > diff --git a/kernel/jump_label.c b/kernel/jump_label.c
> > index d9c822bbffb8..17583c98c447 100644
> > --- a/kernel/jump_label.c
> > +++ b/kernel/jump_label.c
> > @@ -214,6 +214,13 @@ EXPORT_SYMBOL_GPL(static_key_enable_cpuslocked);
> >
> > void static_key_enable(struct static_key *key)
> > {
> > + STATIC_KEY_CHECK_USE(key);
> > +
> > + if (atomic_read(&key->enabled) > 0) {
> > + WARN_ON_ONCE(atomic_read(&key->enabled) != 1);
> > + return;
> > + }
> > +
> > cpus_read_lock();
> > static_key_enable_cpuslocked(key);
> > cpus_read_unlock();
> > @@ -239,6 +246,13 @@ EXPORT_SYMBOL_GPL(static_key_disable_cpuslocked);
> >
> > void static_key_disable(struct static_key *key)
> > {
> > + STATIC_KEY_CHECK_USE(key);
> > +
> > +        if (atomic_read(&key->enabled) != 1) {
> > +                WARN_ON_ONCE(atomic_read(&key->enabled) != 0);
> > +                return;
> > +        }
> > +
> > cpus_read_lock();
> > static_key_disable_cpuslocked(key);
> > cpus_read_unlock();
> >
> > --->8---
>
> I don't think we should do it this way.
> Uclamp can do this because its key is never disabled after being enabled.
> However, other users might enable and disable their keys frequently.
> If we don't take the lock here, there could be race conditions.
I can't see why this is special to uclamp just because its key is never
disabled. The problem is that we call static_key_enable() unconditionally and
expect the jump_label logic to know that the call is redundant. That was
Peter's feedback at the time, as I initially had exactly the check you're
trying to add now.

It is just that today the bail-out check happens after the lock is taken. We
can replicate the same condition before taking the lock, so that double,
triple or N'ble calls to static_key_enable() end up doing nothing, without the
side effect of holding the lock. The fact that a key can get disabled is
unrelated; the problem here is that static_key_enable() is called repeatedly,
and the fix is to not take the lock in that case. I think the atomic_t
variable is enough to do the early bail-out without holding any locks.
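For reference, this is roughly the bail-out that already runs under the lock
today (trimmed from static_key_enable_cpuslocked() in kernel/jump_label.c;
the exact body may differ between kernel versions):

void static_key_enable_cpuslocked(struct static_key *key)
{
        STATIC_KEY_CHECK_USE(key);
        lockdep_assert_cpus_held();

        /*
         * Already enabled: nothing to do. But callers going through
         * static_key_enable() only reach this point after
         * cpus_read_lock() has already been taken.
         */
        if (atomic_read(&key->enabled) > 0) {
                WARN_ON_ONCE(atomic_read(&key->enabled) != 1);
                return;
        }

        jump_label_lock();
        if (atomic_read(&key->enabled) == 0) {
                atomic_set(&key->enabled, -1);
                jump_label_update(key);
                /* See static_key_slow_inc(). */
                atomic_set_release(&key->enabled, 1);
        }
        jump_label_unlock();
}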
What concurrency issues do you have in mind? And why would frequent disabling
be a problem for _replicating_ the early bail-out logic outside of the lock?

If you're thinking of concurrent enable/disable, then I'd say the calling code
has a bigger problem for doing concurrent enable/disable in the first place.
That doesn't make sense: the key would end up enabled on some occasions and
disabled on others depending on how the calls happen to race, and moving the
early bail-out guard outside of the lock wouldn't make this any worse, no?
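To put it in uclamp terms: today the call site unconditionally does
static_branch_enable(&sched_uclamp_used) at the end of uclamp_validate(), and
your patch effectively wraps it like below (a sketch only; the exact placement
follows your patch):

        /* Only enable the static key (and take cpus_read_lock()) once. */
        if (!uclamp_is_used())
                static_branch_enable(&sched_uclamp_used);

My point is that this early return really belongs inside static_key_enable()
itself, so that every caller gets it for free.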
>
> ---
> By the way, I sincerely apologize for forgetting to CC you when I
> sent the v2 and v3 patches.
>
> V2: https://lore.kernel.org/all/20250219093747.2612-2-xuewen.yan@unisoc.com/
> V3: https://lore.kernel.org/all/20250220055950.4405-2-xuewen.yan@unisoc.com/
np, I'll put a comment there too.
Thanks!
--
Qais Yousef