Message-Id: <d06455426aecadd7c7751ed0a5bd24d1bd54422c.1594062828.git.yu.c.chen@intel.com>
Date:   Tue,  7 Jul 2020 03:36:41 +0800
From:   Chen Yu <yu.c.chen@...el.com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Valentin Schneider <valentin.schneider@....com>
Cc:     Vincent Guittot <vincent.guittot@...aro.org>,
        Ingo Molnar <mingo@...hat.com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        linux-kernel@...r.kernel.org, Chen Yu <yu.c.chen@...el.com>
Subject: [PATCH 1/2][RFC] sched/topology: Add update_domain_cpu()

Introduce update_domain_cpu(), a wrapper around
update_top_cache_domain(). In update_domain_cpu(), the CPU hotplug
lock protects against a concurrent rebuild of the sched domains, and
the RCU read lock protects the dereference of the domain tree (rq->sd)
in update_top_cache_domain(). This prepares for the next patch, which
updates the sched domain flags via sysctl.

No functional change intended.
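
For illustration only (a hypothetical sketch, not part of this patch):
a caller such as the sysctl handler introduced in the next patch could
refresh the cached domain pointers on every online CPU like this:

	static void refresh_all_domain_cpus(void)
	{
		int cpu;

		/* Re-derive the cached top-level domain pointers per CPU. */
		for_each_online_cpu(cpu)
			update_domain_cpu(cpu);
	}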

Signed-off-by: Chen Yu <yu.c.chen@...el.com>
---
 include/linux/sched/topology.h |  5 +++++
 kernel/sched/topology.c        | 11 +++++++++++
 2 files changed, 16 insertions(+)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index fb11091129b3..dc81736090e3 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -161,6 +161,7 @@ cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
 void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
 
 bool cpus_share_cache(int this_cpu, int that_cpu);
+void update_domain_cpu(int cpu);
 
 typedef const struct cpumask *(*sched_domain_mask_f)(int cpu);
 typedef int (*sched_domain_flags_f)(void);
@@ -214,6 +215,10 @@ static inline bool cpus_share_cache(int this_cpu, int that_cpu)
 	return true;
 }
 
+static inline void update_domain_cpu(int cpu)
+{
+}
+
 #endif	/* !CONFIG_SMP */
 
 #ifndef arch_scale_cpu_capacity
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index ba81187bb7af..495b65367a12 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -651,6 +651,17 @@ static void update_top_cache_domain(int cpu)
 	rcu_assign_pointer(per_cpu(sd_asym_cpucapacity, cpu), sd);
 }
 
+void update_domain_cpu(int cpu)
+{
+	/* Protect against sched domain rebuild. */
+	get_online_cpus();
+	/* Guard read-side sched domain dereference. */
+	rcu_read_lock();
+	update_top_cache_domain(cpu);
+	rcu_read_unlock();
+	put_online_cpus();
+}
+
 /*
  * Attach the domain 'sd' to 'cpu' as its base domain. Callers must
  * hold the hotplug lock.
-- 
2.17.1
