Message-ID: <c086b5fb-681e-d104-1e11-873ed5444c5c@bytedance.com>
Date: Thu, 8 Dec 2022 17:07:52 +0800
From: Abel Wu <wuyun.abel@...edance.com>
To: chenying <chenying.kernel@...edance.com>, mingo@...hat.com,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Benjamin Segall <bsegall@...gle.com>
Cc: linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] Reduce rq lock contention in load_balance()
Hi Ying,
On 11/24/22 5:07 PM, chenying wrote:
> ...
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index a4a20046e586..384690bda8c3 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -954,6 +954,7 @@ struct balance_callback {
>  struct rq {
>  	/* runqueue lock: */
>  	raw_spinlock_t		__lock;
> +	raw_spinlock_t		lb_lock;
Do we really need a new lock for this? I'd suggest the
following instead:
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 87522c3de7b2..30d84e066a9a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1048,6 +1048,7 @@ struct rq {
 	struct balance_callback *balance_callback;

+	unsigned char		balancing;
 	unsigned char		nohz_idle_balance;
 	unsigned char		idle_balance;
and skip in-balancing runqueues early in find_busiest_queue().
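
Roughly like this (a completely untested sketch against a recent
tree; the READ_ONCE()/WRITE_ONCE() pairing and the exact points
where the flag is set and cleared are my assumptions, not a
worked-out patch):

/* kernel/sched/fair.c -- untested sketch */

static struct rq *find_busiest_queue(struct lb_env *env,
				     struct sched_group *group)
{
	...
	for_each_cpu_and(i, sched_group_span(group), env->cpus) {
		struct rq *rq = cpu_rq(i);

		/*
		 * Another CPU is already pulling tasks from this rq;
		 * picking it again would most likely just contend on
		 * its lock, so skip it early.
		 */
		if (READ_ONCE(rq->balancing))
			continue;
		...
	}
	...
}

static int load_balance(int this_cpu, struct rq *this_rq,
			struct sched_domain *sd, enum cpu_idle_type idle,
			int *continue_balancing)
{
	...
	rq_lock_irqsave(busiest, &rf);

	/* Publish that tasks are being pulled from @busiest. */
	WRITE_ONCE(busiest->balancing, 1);

	/* detach_tasks() and friends as before */

	WRITE_ONCE(busiest->balancing, 0);
	rq_unlock(busiest, &rf);
	...
}

Since the flag would only be written under busiest's rq lock and
read locklessly as a hint, a stale read just costs scanning one
more candidate rather than affecting correctness.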
Thanks,
Abel
> 
>  	/*
>  	 * nr_running and cpu_load should be in the same cacheline because