Message-ID: <ZrJ2yKrKuWOscRpf@slm.duckdns.org>
Date: Tue, 6 Aug 2024 09:17:28 -1000
From: Tejun Heo <tj@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org, David Vernet <void@...ifault.com>,
Ingo Molnar <mingo@...hat.com>, Alexei Starovoitov <ast@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [GIT PULL] sched_ext: Initial pull request for v6.11
Hello,
On Tue, Aug 06, 2024 at 10:27:16AM +0200, Peter Zijlstra wrote:
...
> > When !CONFIG_PREEMPTION, double_lock_balance() seems cheaper than unlocking
> > and locking unconditionally. Because SCX schedulers can do a lot more hot
> > migrations, I thought it'd be better to go that way. I haven't actually
> > measured anything tho, so I could be wrong.
>
> So I think the theory is something like this.
>
> If you take a spinlock, your wait-time W is N times the hold-time H,
> where the hold-time is the avg/max (depending on your analysis goals)
> time you hold the lock for, and N is the contention level or number of
> waiters etc.
>
> Now, when you nest locks, your hold-time increases by the wait-time
> of the nested lock. In this case, since it's the 'same' lock class, your
> hold-time gets a recursive wait-time term, that is: H' = H + N*H.
>
> This blows up your wait-time, which makes contention worse, because
> what was W = N*H then becomes W' = N*H' = N*(H + N*H), roughly N^2*H.
Thanks for the explanation. Much appreciated.
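Just to make sure I'm reading the math right, plugging in made-up numbers
(the N and H below are purely illustrative, not measurements):

    W  = N * H                      e.g. N = 8 waiters, H = 1us  ->  W  =  8us
    H' = H + W = (N + 1) * H                                         H' =  9us
    W' = N * H' = N * (N + 1) * H  (~ N^2 * H)                       W' = 72us

So roughly an order of magnitude worse at that contention level, and it gets
quadratically worse as N grows.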
> Anyway, at the time we saw great benefits from moving away from the
> double-lock thing, it might be worth looking into when/if you see
> significant lock contention; because obviously if the locks are not
> contended it all doesn't matter.
I think we *may* have seen this in action on a NUMA machine running a
scheduler without topology awareness, which was thus migrating tasks across
node boundaries frequently. I'll see whether I can reproduce it and whether
getting rid of the double locking improves the situation.
Thanks.
--
tejun