Message-ID: <tip-eb7a74e6cd936c00749e2921b9e058631d986648@git.kernel.org>
Date:	Mon, 11 Apr 2011 14:41:37 GMT
From:	tip-bot for Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	linux-tip-commits@...r.kernel.org
Cc:	linux-kernel@...r.kernel.org, hpa@...or.com, mingo@...hat.com,
	torvalds@...ux-foundation.org, a.p.zijlstra@...llo.nl,
	efault@....de, npiggin@...nel.dk, akpm@...ux-foundation.org,
	tglx@...utronix.de, mingo@...e.hu
Subject: [tip:sched/domains] sched: Stuff the sched_domain creation in a data-structure

Commit-ID:  eb7a74e6cd936c00749e2921b9e058631d986648
Gitweb:     http://git.kernel.org/tip/eb7a74e6cd936c00749e2921b9e058631d986648
Author:     Peter Zijlstra <a.p.zijlstra@...llo.nl>
AuthorDate: Thu, 7 Apr 2011 14:10:00 +0200
Committer:  Ingo Molnar <mingo@...e.hu>
CommitDate: Mon, 11 Apr 2011 14:09:26 +0200

sched: Stuff the sched_domain creation in a data-structure

In order to make the topology construction fully dynamic, remove the
still hard-coded list of possible domains and stick them in a
data-structure.
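
For illustration, the same table-driven idea as a self-contained
user-space sketch: the builders live in a NULL-terminated table of
function pointers and the per-cpu loop walks that table instead of
calling a fixed sequence.  The names below (struct domain,
build_level_a/build_level_b, default_topology with two levels) are
made up for the sketch and are not the kernel's API beyond what the
diff itself shows.

#include <stdio.h>
#include <stdlib.h>

struct domain {
	const char *name;
	struct domain *parent;
};

/* Each topology level supplies one builder of this shape. */
typedef struct domain *(*domain_build_f)(struct domain *parent, int cpu);

static struct domain *build_level(const char *name, struct domain *parent,
				  int cpu)
{
	struct domain *d = malloc(sizeof(*d));

	d->name = name;
	d->parent = parent;
	printf("cpu%d: built level %s (parent: %s)\n",
	       cpu, name, parent ? parent->name : "none");
	return d;
}

static struct domain *build_level_a(struct domain *parent, int cpu)
{
	return build_level("A", parent, cpu);
}

static struct domain *build_level_b(struct domain *parent, int cpu)
{
	return build_level("B", parent, cpu);
}

struct topology_level {
	domain_build_f build;
};

/* NULL-terminated table; editing it changes the hierarchy without
 * touching the per-cpu loop below. */
static struct topology_level default_topology[] = {
	{ build_level_a, },
	{ build_level_b, },
	{ NULL, },
};

int main(void)
{
	struct topology_level *tl;
	struct domain *d;
	int cpu;

	for (cpu = 0; cpu < 2; cpu++) {
		d = NULL;
		for (tl = default_topology; tl->build; tl++)
			d = tl->build(d, cpu);
	}
	return 0;
}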

Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Mike Galbraith <efault@....de>
Cc: Nick Piggin <npiggin@...nel.dk>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Link: http://lkml.kernel.org/r/20110407122942.770335383@chello.nl
Signed-off-by: Ingo Molnar <mingo@...e.hu>
---
 kernel/sched.c |   32 ++++++++++++++++++++++++++------
 1 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 3ae1e02..f0e1821 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -6843,6 +6843,16 @@ enum s_alloc {
 	sa_none,
 };
 
+typedef struct sched_domain *(*sched_domain_build_f)(struct s_data *d,
+		const struct cpumask *cpu_map, struct sched_domain_attr *attr,
+		struct sched_domain *parent, int cpu);
+
+typedef const struct cpumask *(*sched_domain_mask_f)(int cpu);
+
+struct sched_domain_topology_level {
+	sched_domain_build_f build;
+};
+
 /*
  * Assumes the sched_domain tree is fully constructed
  */
@@ -7185,6 +7195,18 @@ static struct sched_domain *__build_smt_sched_domain(struct s_data *d,
 	return sd;
 }
 
+static struct sched_domain_topology_level default_topology[] = {
+	{ __build_allnodes_sched_domain, },
+	{ __build_node_sched_domain, },
+	{ __build_cpu_sched_domain, },
+	{ __build_book_sched_domain, },
+	{ __build_mc_sched_domain, },
+	{ __build_smt_sched_domain, },
+	{ NULL, },
+};
+
+static struct sched_domain_topology_level *sched_domain_topology = default_topology;
+
 /*
  * Build sched domains for a given set of cpus and attach the sched domains
  * to the individual cpus
@@ -7203,13 +7225,11 @@ static int build_sched_domains(const struct cpumask *cpu_map,
 
 	/* Set up domains for cpus specified by the cpu_map. */
 	for_each_cpu(i, cpu_map) {
+		struct sched_domain_topology_level *tl;
+
 		sd = NULL;
-		sd = __build_allnodes_sched_domain(&d, cpu_map, attr, sd, i);
-		sd = __build_node_sched_domain(&d, cpu_map, attr, sd, i);
-		sd = __build_cpu_sched_domain(&d, cpu_map, attr, sd, i);
-		sd = __build_book_sched_domain(&d, cpu_map, attr, sd, i);
-		sd = __build_mc_sched_domain(&d, cpu_map, attr, sd, i);
-		sd = __build_smt_sched_domain(&d, cpu_map, attr, sd, i);
+		for (tl = sched_domain_topology; tl->build; tl++)
+			sd = tl->build(&d, cpu_map, attr, sd, i);
 
 		*per_cpu_ptr(d.sd, i) = sd;
 	}
--
