lists.openwall.net
Open Source and information security mailing list archives
Date: Fri, 7 Sep 2018 23:40:30 +0200
From: Jan H. Schönherr <jschoenh@...zon.de>
To: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>
Cc: Jan H. Schönherr <jschoenh@...zon.de>, linux-kernel@...r.kernel.org
Subject: [RFC 43/60] cosched: Add for_each_sched_entity() variant for owned entities

Add a new loop construct for_each_owned_sched_entity(), which iterates
over all owned scheduling entities, stopping when it encounters a leader
change. This allows relatively straight-forward adaptations of existing
code, where the leader only handles that part of the hierarchy it
actually owns.

Include some lockdep goodness, so that we detect incorrect usage.

Signed-off-by: Jan H. Schönherr <jschoenh@...zon.de>
---
 kernel/sched/fair.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f72a72c8c3b8..f55954e7cedc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -521,6 +521,76 @@ find_matching_se(struct sched_entity **se, struct sched_entity **pse)
 static __always_inline
 void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec);
 
+#ifdef CONFIG_COSCHEDULING
+struct sched_entity *owned_parent_entity(struct sched_entity *se)
+{
+	struct rq *prq, *rq = hrq_of(cfs_rq_of(se));
+
+	lockdep_assert_held(&rq->lock);
+
+	se = parent_entity(se);
+	if (!se)
+		return NULL;
+
+	prq = hrq_of(cfs_rq_of(se));
+	if (rq == prq)
+		return se;
+
+#ifdef CONFIG_SCHED_DEBUG
+	if (!rq->sdrq_data.parent_locked) {
+		int leader = rq->sdrq_data.leader;
+		int pleader = READ_ONCE(prq->sdrq_data.leader);
+
+		SCHED_WARN_ON(leader == pleader);
+	} else {
+		int leader = rq->sdrq_data.leader;
+		int pleader = prq->sdrq_data.leader;
+
+		SCHED_WARN_ON(leader != pleader);
+	}
+#endif
+
+	if (!rq->sdrq_data.parent_locked)
+		return NULL;
+
+	lockdep_assert_held(&prq->lock);
+	return se;
+}
+
+static inline int leader_of(struct sched_entity *se)
+{
+	struct rq *rq = hrq_of(cfs_rq_of(se));
+
+	lockdep_assert_held(&rq->lock);
+	return rq->sdrq_data.leader;
+}
+
+static inline int __leader_of(struct sched_entity *se)
+{
+	struct rq *rq = hrq_of(cfs_rq_of(se));
+
+	return READ_ONCE(rq->sdrq_data.leader);
+}
+#else
+struct sched_entity *owned_parent_entity(struct sched_entity *se)
+{
+	return parent_entity(se);
+}
+
+static inline int leader_of(struct sched_entity *se)
+{
+	return cpu_of(rq_of(cfs_rq_of(se)));
+}
+
+static inline int __leader_of(struct sched_entity *se)
+{
+	return leader_of(se);
+}
+#endif
+
+#define for_each_owned_sched_entity(se) \
+	for (; se; se = owned_parent_entity(se))
+
 /**************************************************************
  * Scheduling class tree data structure manipulation methods:
  */
-- 
2.9.3.1.gcba166c.dirty