Message-ID: <Zer1Hkxh/UMxs3xs@gmail.com>
Date: Fri, 8 Mar 2024 12:23:10 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Shrikanth Hegde <sshegde@...ux.ibm.com>
Cc: linux-kernel@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Valentin Schneider <vschneid@...hat.com>
Subject: Re: [PATCH 1/9] sched/balancing: Switch the
'DEFINE_SPINLOCK(balancing)' spinlock into an 'atomic_t
sched_balance_running' flag

* Shrikanth Hegde <sshegde@...ux.ibm.com> wrote:
> system is at 75% load <-- 25.6% contention
> 113K probe:rebalance_domains_L37
> 84K probe:rebalance_domains_L55
>
> system is at 100% load <-- 87.5% contention.
> 64K probe:rebalance_domains_L37
> 8K probe:rebalance_domains_L55
>
> A few possible reasons for the contention:
>
> 1. Idle load balancing is running while some other CPU becomes idle
> and tries newidle_balance.
>
> 2. When the system is busy, every CPU does busy balancing and contends
> for the lock. It then does not actually balance, because
> should_we_balance() says this CPU need not balance, so it bails out
> and releases the lock.
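
For context, the contention figures above appear to be the share of
failed flag acquisitions (1 - 84K/113K ~= 25.6%, 1 - 8K/64K = 87.5%).
The serialization that both cases contend on can be sketched in
userspace C11 roughly as follows; this is an illustrative analogue of
the 'atomic_t sched_balance_running' flag from the patch subject, and
the names and memory orderings here are assumptions, not the exact
kernel code:

	/* C11 analogue of the trylock-style flag: at most one CPU
	 * (thread, in this sketch) "wins" and performs the serialized
	 * rebalance; everyone else bails out immediately. */
	#include <stdatomic.h>
	#include <stdbool.h>

	static atomic_int sched_balance_running;

	/* Try to become the single balancer; false => already taken. */
	static bool balance_try_enter(void)
	{
		int expected = 0;

		return atomic_compare_exchange_strong_explicit(
				&sched_balance_running, &expected, 1,
				memory_order_acquire, memory_order_relaxed);
	}

	/* Release the flag so the next balancer can proceed. */
	static void balance_exit(void)
	{
		atomic_store_explicit(&sched_balance_running, 0,
				      memory_order_release);
	}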
Thanks, these measurements are really useful!

Would it be possible to disambiguate these two cases?

I think we should probably do something about this contention on this
large system, especially if the #2 'no work to be done' bailout is the
common case.
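
One possible direction for the #2 case (purely a hypothetical sketch
building on the flag analogue above; the real should_we_balance() check
sits deeper in the balancing path and has a different signature): let
ineligible CPUs bail out before ever touching the shared flag, so the
common 'no work to be done' case stops contending:

	/* Hypothetical reordering: decide eligibility first, and only
	 * touch the shared cache line when this CPU may actually balance. */
	static void rebalance_serialized(bool this_cpu_should_balance)
	{
		if (!this_cpu_should_balance)
			return;		/* case #2: no contention at all */

		if (!balance_try_enter())
			return;		/* case #1: another CPU is balancing */

		/* ... perform the serialized domain rebalance ... */

		balance_exit();
	}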
Thanks,
Ingo