Message-ID: <CABk29Ntop2nX+z1bV7giG8ToR_w3f_+GYGAw+hFQ6g9rCZunmw@mail.gmail.com>
Date: Fri, 23 Apr 2021 18:22:52 -0700
From: Josh Don <joshdon@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Joel Fernandes <joel@...lfernandes.org>,
"Hyser,Chris" <chris.hyser@...cle.com>,
Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Valentin Schneider <valentin.schneider@....com>,
Mel Gorman <mgorman@...e.de>,
linux-kernel <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 04/19] sched: Prepare for Core-wide rq->lock
Hi Peter,
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -186,12 +186,37 @@ int sysctl_sched_rt_runtime = 950000;
>
> void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
> {
> - raw_spin_lock_nested(rq_lockp(rq), subclass);
> + raw_spinlock_t *lock;
> +
> + if (sched_core_disabled()) {
Is there anything to stop sched_core from being enabled right here? If
not, we could end up taking the wrong lock (one possible way to close
that window is sketched below the quoted hunk).
> + raw_spin_lock_nested(&rq->__lock, subclass);
> + return;
> + }
> +
> + for (;;) {
> + lock = rq_lockp(rq);
> + raw_spin_lock_nested(lock, subclass);
> + if (likely(lock == rq_lockp(rq)))
> + return;
> + raw_spin_unlock(lock);
> + }
> }
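
One possible way to close that window, I think, is to re-check
rq_lockp() after taking rq->__lock and fall through to the retry loop
if it changed underneath us. Just a rough sketch against this hunk,
not tested:

void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
{
	raw_spinlock_t *lock;

	if (sched_core_disabled()) {
		raw_spin_lock_nested(&rq->__lock, subclass);
		/*
		 * sched_core may have been enabled concurrently; if this
		 * rq still uses its own lock we are done, otherwise drop
		 * it and fall through to the retry loop below.
		 */
		if (likely(rq_lockp(rq) == &rq->__lock))
			return;
		raw_spin_unlock(&rq->__lock);
	}

	for (;;) {
		lock = rq_lockp(rq);
		raw_spin_lock_nested(lock, subclass);
		if (likely(lock == rq_lockp(rq)))
			return;
		raw_spin_unlock(lock);
	}
}

That keeps the common (core disabled) case to a single lock acquisition
while still tolerating a concurrent enable.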