Message-ID: <ZyqkCbC9NPBxsPU1@slm.duckdns.org>
Date: Tue, 5 Nov 2024 13:02:33 -1000
From: Tejun Heo <tj@...nel.org>
To: David Vernet <void@...ifault.com>
Cc: linux-kernel@...r.kernel.org, kernel-team@...a.com, sched-ext@...a.com,
Andrea Righi <arighi@...dia.com>,
Changwoo Min <multics69@...il.com>
Subject: Re: [PATCH sched_ext/for-6.13 1/2] sched_ext: Avoid live-locking bypass mode switching

Hello,
On Tue, Nov 05, 2024 at 04:03:46PM -0600, David Vernet wrote:
> On Tue, Nov 05, 2024 at 11:48:11AM -1000, Tejun Heo wrote:
>
> [...]
>
> > static bool consume_dispatch_q(struct rq *rq, struct scx_dispatch_q *dsq)
> > {
> > 	struct task_struct *p;
> > retry:
> > 	/*
> > +	 * This retry loop can repeatedly race against scx_ops_bypass()
> > +	 * dequeueing tasks from @dsq trying to put the system into the bypass
> > +	 * mode. On some multi-socket machines (e.g. 2x Intel 8480c), this can
> > +	 * live-lock the machine into soft lockups. Give a breather.
> > +	 */
> > +	scx_ops_breather(rq);
>
> Should we move this to after the list_empty() check? Or before the goto retry
> below so we can avoid having to do the atomic read on the typical hotpath?
I don't think there's going to be a measurable difference in overhead. In
most cases it's a cached unlikely jump, and there's value in catching the
CPUs in the breather as soon as possible, even if the currently targeted
DSQ is empty, as it's difficult to reliably predict the different lockup
scenarios.
Thanks.
--
tejun