Open Source and information security mailing list archives
Date: Thu, 01 Jan 2009 08:46:03 +0100
From: Mike Galbraith <efault@....de>
To: Jayson King <dev@...sonking.com>
Cc: linux-kernel@...r.kernel.org, a.p.zijlstra@...llo.nl, mingo@...e.hu
Subject: [patch] Re: problem with "sched: revert back to per-rq vruntime"?

Would perhaps be prettier to have the load already in place at call time,
but methinks the enqueue/dequeue accounting logic is nice as is, so
complete the unlikely case handling in an unlikely block.

Impact: bug fixlet.

Account for tasks which have not yet been enqueued in calc_delta_weight().

Signed-off-by: Mike Galbraith <efault@....de>

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 5ad4440..4685f28 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -392,8 +392,16 @@ static inline unsigned long
 calc_delta_weight(unsigned long delta, struct sched_entity *se)
 {
 	for_each_sched_entity(se) {
-		delta = calc_delta_mine(delta,
-				se->load.weight, &cfs_rq_of(se)->load);
+		struct load_weight *load = &cfs_rq_of(se)->load;
+
+		if (unlikely(!se->on_rq)) {
+			struct load_weight tmp;
+
+			tmp.weight = load->weight + se->load.weight;
+			tmp.inv_weight = 0;
+			load = &tmp;
+		}
+		delta = calc_delta_mine(delta, se->load.weight, load);
 	}

 	return delta;