Message-ID: <20241105220346.GA64119@maniforge>
Date: Tue, 5 Nov 2024 16:03:46 -0600
From: David Vernet <void@...ifault.com>
To: Tejun Heo <tj@...nel.org>
Cc: linux-kernel@...r.kernel.org, kernel-team@...a.com, sched-ext@...a.com,
Andrea Righi <arighi@...dia.com>,
Changwoo Min <multics69@...il.com>
Subject: Re: [PATCH sched_ext/for-6.13 1/2] sched_ext: Avoid live-locking bypass mode switching

On Tue, Nov 05, 2024 at 11:48:11AM -1000, Tejun Heo wrote:
[...]
> static bool consume_dispatch_q(struct rq *rq, struct scx_dispatch_q *dsq)
> {
> 	struct task_struct *p;
> retry:
> 	/*
> +	 * This retry loop can repeatedly race against scx_ops_bypass()
> +	 * dequeueing tasks from @dsq trying to put the system into the bypass
> +	 * mode. On some multi-socket machines (e.g. 2x Intel 8480c), this can
> +	 * live-lock the machine into soft lockups. Give a breather.
> +	 */
> +	scx_ops_breather(rq);

Should we move this to after the list_empty() check, or to just before the
goto retry below, so we can avoid doing the atomic read on the typical hot
path?
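
Something like the following, just to illustrate the first option (untested
sketch; the list_empty() test is paraphrased from the comment below and the
rest of the function is elided):

	static bool consume_dispatch_q(struct rq *rq, struct scx_dispatch_q *dsq)
	{
		struct task_struct *p;
	retry:
		/*
		 * Unlocked emptiness test first -- the common case -- so that
		 * we only pay for the atomic read in scx_ops_breather() when
		 * we may actually spin on @dsq.
		 */
		if (list_empty(&dsq->list))
			return false;

		scx_ops_breather(rq);
		...
	}
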
> +
> +	/*
> 	 * The caller can't expect to successfully consume a task if the task's
> 	 * addition to @dsq isn't guaranteed to be visible somehow. Test
> 	 * @dsq->list without locking and skip if it seems empty.
> @@ -4550,10 +4587,11 @@ bool task_should_scx(struct task_struct
>  */
> static void scx_ops_bypass(bool bypass)
> {
> +	static DEFINE_RAW_SPINLOCK(bypass_lock);
> 	int cpu;
> 	unsigned long flags;
>
> -	raw_spin_lock_irqsave(&__scx_ops_bypass_lock, flags);
> +	raw_spin_lock_irqsave(&bypass_lock, flags);
> 	if (bypass) {
> 		scx_ops_bypass_depth++;
> 		WARN_ON_ONCE(scx_ops_bypass_depth <= 0);
> @@ -4566,6 +4604,8 @@ static void scx_ops_bypass(bool bypass)
> 			goto unlock;
> 	}
>
> +	atomic_inc(&scx_ops_breather_depth);
> +
> 	/*
> 	 * No task property is changing. We just need to make sure all currently
> 	 * queued tasks are re-queued according to the new scx_rq_bypassing()
> @@ -4621,8 +4661,10 @@ static void scx_ops_bypass(bool bypass)
> 		/* resched to restore ticks and idle state */
> 		resched_cpu(cpu);
> 	}
> +
> +	atomic_dec(&scx_ops_breather_depth);
> unlock:
> -	raw_spin_unlock_irqrestore(&__scx_ops_bypass_lock, flags);
> +	raw_spin_unlock_irqrestore(&bypass_lock, flags);
> }
>
> static void free_exit_info(struct scx_exit_info *ei)
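
As an aside for anyone following along: the atomic_inc()/atomic_dec() pair
above is what the dispatch paths spin on. I'd expect the breather helper
(elided from the quote above) to look roughly like the below -- a sketch,
not the patch's exact code:

	static void scx_ops_breather(struct rq *rq)
	{
		lockdep_assert_rq_held(rq);

		if (likely(!atomic_read(&scx_ops_breather_depth)))
			return;

		/*
		 * scx_ops_bypass() is mid-transition. Drop the rq lock so
		 * that it can make progress, spin until it's done, then
		 * re-acquire and let the caller retry.
		 */
		raw_spin_rq_unlock(rq);
		while (atomic_read(&scx_ops_breather_depth))
			cpu_relax();
		raw_spin_rq_lock(rq);
	}
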
> @@ -6275,6 +6317,13 @@ static bool scx_dispatch_from_dsq(struct
> 		raw_spin_rq_lock(src_rq);
> 	}
>
> +	/*
> +	 * If the BPF scheduler keeps calling this function repeatedly, it can
> +	 * cause similar live-lock conditions as consume_dispatch_q(). Insert a
> +	 * breather if necessary.
> +	 */
> +	scx_ops_breather(src_rq);
> +
> 	locked_rq = src_rq;
> 	raw_spin_lock(&src_dsq->lock);
>
>
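For the record, the pattern this last hunk guards against would be a BPF
scheduler doing something like the following on every CPU (hypothetical
sketch in the style of the scx example schedulers; example_dispatch and
SHARED_DSQ are made-up names):

	void BPF_STRUCT_OPS(example_dispatch, s32 cpu, struct task_struct *prev)
	{
		struct task_struct *p;

		/*
		 * Each iteration takes the source rq and DSQ locks via
		 * scx_bpf_dispatch_from_dsq(). A tight loop like this running
		 * on every CPU can starve scx_ops_bypass(), hence the
		 * breather.
		 */
		bpf_for_each(scx_dsq, p, SHARED_DSQ, 0) {
			if (scx_bpf_dispatch_from_dsq(BPF_FOR_EACH_ITER, p,
						      SCX_DSQ_LOCAL, 0))
				break;
		}
	}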