Message-Id: <20230412141053.59498-1-ligang.bdlg@bytedance.com>
Date: Wed, 12 Apr 2023 22:10:52 +0800
From: Gang Li <ligang.bdlg@...edance.com>
To: John Hubbard <jhubbard@...dia.com>,
Jonathan Corbet <corbet@....net>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>
Cc: linux-api@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-doc@...r.kernel.org,
Gang Li <ligang.bdlg@...edance.com>
Subject: [PATCH v6 1/2] sched/numa: use static_branch_inc/dec for sched_numa_balancing
Per-process NUMA balancing will use static_branch_inc/dec() to count
how many users have enabled sched_numa_balancing, so the global sysctl
path must be converted to static_branch_inc/dec() as well.
Cc: linux-api@...r.kernel.org
Signed-off-by: Gang Li <ligang.bdlg@...edance.com>
Acked-by: John Hubbard <jhubbard@...dia.com>
---
kernel/sched/core.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 94be4eebfa53..99cc1d5821a1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4501,21 +4501,15 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
int sysctl_numa_balancing_mode;
-static void __set_numabalancing_state(bool enabled)
-{
- if (enabled)
- static_branch_enable(&sched_numa_balancing);
- else
- static_branch_disable(&sched_numa_balancing);
-}
-
void set_numabalancing_state(bool enabled)
{
- if (enabled)
+ if (enabled) {
sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
- else
+ static_branch_enable(&sched_numa_balancing);
+ } else {
sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
- __set_numabalancing_state(enabled);
+ static_branch_disable(&sched_numa_balancing);
+ }
}
#ifdef CONFIG_PROC_SYSCTL
@@ -4549,8 +4543,14 @@ static int sysctl_numa_balancing(struct ctl_table *table, int write,
if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
(state & NUMA_BALANCING_MEMORY_TIERING))
reset_memory_tiering();
- sysctl_numa_balancing_mode = state;
- __set_numabalancing_state(state);
+ if (sysctl_numa_balancing_mode != state) {
+ if (state == NUMA_BALANCING_DISABLED)
+ static_branch_dec(&sched_numa_balancing);
+ else if (sysctl_numa_balancing_mode == NUMA_BALANCING_DISABLED)
+ static_branch_inc(&sched_numa_balancing);
+
+ sysctl_numa_balancing_mode = state;
+ }
}
return err;
}
--
2.20.1