Message-ID: <20250817152909.45567727@pumpkin>
Date: Sun, 17 Aug 2025 15:29:09 +0100
From: David Laight <david.laight.linux@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Kuba Piecuch <jpiecuch@...gle.com>, mingo@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com, joshdon@...gle.com,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/3] sched: add ability to throttle sched_yield()
calls to reduce contention
On Thu, 14 Aug 2025 16:53:08 +0200
Peter Zijlstra <peterz@...radead.org> wrote:
> On Mon, Aug 11, 2025 at 03:35:35PM +0200, Kuba Piecuch wrote:
> > On Mon, Aug 11, 2025 at 10:36 AM Peter Zijlstra <peterz@...radead.org> wrote:
...
> > The code calling sched_yield() was in the wait loop for a spinlock. It
> > would repeatedly yield until the compare-and-swap instruction succeeded
> > in acquiring the lock. This code runs in the SIGPROF handler.
>
> Well, then don't do that... userspace spinlocks are terrible, and
> bashing yield like that isn't helpful either.
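
For reference, the pattern described above is presumably something along
these lines (an untested sketch, with made-up names, not the actual code):

/* Untested sketch of a userspace spinlock whose waiters call
 * sched_yield(); names are hypothetical, not from the real code. */
#include <sched.h>
#include <stdatomic.h>

static atomic_int lock_word;		/* 0 = free, 1 = held */

static void yielding_spin_lock(void)
{
	int expected = 0;

	/* Keep retrying the CAS, yielding the CPU after each failure;
	 * it is these sched_yield() calls that cause the contention. */
	while (!atomic_compare_exchange_weak(&lock_word, &expected, 1)) {
		expected = 0;
		sched_yield();
	}
}

static void yielding_spin_unlock(void)
{
	atomic_store(&lock_word, 0);
}
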
All it takes is for the kernel to take a hardware interrupt while your
'spin lock' is held, and any other thread trying to acquire the lock
will sit at 100% CPU until all the interrupt work finishes.
A typical ethernet interrupt will schedule more work from softirq
context, and with non-threaded NAPI you have to wait for that to
finish as well.
That can all take milliseconds.
The same is true for a futex-based lock - but at least the waiting
threads sleep.
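
For comparison, a minimal futex-based lock looks roughly like this
(simplified sketch; a real one also tracks a "locked with waiters"
state so the uncontended unlock can skip the syscall, see futex(2)):

/* Simplified sketch of a futex-based lock: contended waiters sleep in
 * FUTEX_WAIT instead of spinning.  This version always issues the
 * FUTEX_WAKE on unlock, which a real implementation would avoid. */
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

static atomic_int futex_word;		/* 0 = free, 1 = held */

static void futex_lock(void)
{
	int expected = 0;

	while (!atomic_compare_exchange_strong(&futex_word, &expected, 1)) {
		/* Sleep until the kernel wakes us, rather than burning CPU. */
		syscall(SYS_futex, &futex_word, FUTEX_WAIT, 1, NULL, NULL, 0);
		expected = 0;
	}
}

static void futex_unlock(void)
{
	atomic_store(&futex_word, 0);
	syscall(SYS_futex, &futex_word, FUTEX_WAKE, 1, NULL, NULL, 0);
}
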
Pretty much the only solution is to replace the userspace locks with
atomic operations (and hope the atomics make progress).
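
A rough illustration of what I mean, assuming the shared state can be
collapsed into a single word (illustrative only):

/* Illustration only: collapse the lock-protected update into a CAS
 * loop on the shared word, so no thread ever blocks holding a lock
 * and an interrupted thread cannot stall the others. */
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t shared_word;

static void shared_add(uint64_t delta)
{
	uint64_t old = atomic_load(&shared_word);

	/* On failure the CAS reloads 'old' with the current value. */
	while (!atomic_compare_exchange_weak(&shared_word, &old, old + delta))
		;
}
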
I'm pretty sure it only makes sense to have spin locks that disable
interrupts.
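
In kernel terms that is the spin_lock_irqsave() pattern, roughly
(illustrative sketch, hypothetical names):

/* Illustrative only: a kernel-style critical section that disables
 * local interrupts, so the lock holder cannot be interrupted while
 * other CPUs spin waiting for the lock.  Names are made up. */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);

static void update_shared_state(void)
{
	unsigned long flags;

	spin_lock_irqsave(&my_lock, flags);
	/* ... touch data also accessed from interrupt context ... */
	spin_unlock_irqrestore(&my_lock, flags);
}
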
David