Message-ID: <Z5SiY0WNvEHXkrv5@gpd3>
Date: Sat, 25 Jan 2025 09:35:47 +0100
From: Andrea Righi <arighi@...dia.com>
To: Tejun Heo <tj@...nel.org>, David Vernet <void@...ifault.com>,
Changwoo Min <changwoo@...lia.com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] sched_ext: Fix lock imbalance in
dispatch_to_local_dsq()

On Sat, Jan 25, 2025 at 07:56:08AM +0100, Andrea Righi wrote:
...
> @@ -2557,6 +2567,7 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
>  {
>  	struct rq *src_rq = task_rq(p);
>  	struct rq *dst_rq = container_of(dst_dsq, struct rq, scx.local_dsq);
> +	struct rq *locked_rq = rq;

I just noticed that we have an unused variable here with !CONFIG_SMP, so
ignore this. I'll send a new version soon.

Sorry for the noise.

-Andrea
>  
>  	/*
>  	 * We're synchronized against dequeue through DISPATCHING. As @p can't
> @@ -2593,12 +2604,16 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
>  	atomic_long_set_release(&p->scx.ops_state, SCX_OPSS_NONE);
>  
>  	/* switch to @src_rq lock */
> -	if (rq != src_rq) {
> -		raw_spin_rq_unlock(rq);
> +	if (locked_rq != src_rq) {
> +		raw_spin_rq_unlock(locked_rq);
> +		locked_rq = src_rq;
>  		raw_spin_rq_lock(src_rq);
>  	}
>  
> -	/* task_rq couldn't have changed if we're still the holding cpu */
> +	/*
> +	 * If p->scx.holding_cpu still matches the current CPU, task_rq(p)
> +	 * has not changed and we can safely move the task to @dst_rq.
> +	 */
>  	if (likely(p->scx.holding_cpu == raw_smp_processor_id()) &&
>  	    !WARN_ON_ONCE(src_rq != task_rq(p))) {
>  		/*