Message-Id: <1356799386-4212-17-git-send-email-fweisbec@gmail.com>
Date: Sat, 29 Dec 2012 17:42:55 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Frederic Weisbecker <fweisbec@...il.com>,
	Alessio Igor Bogani <abogani@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Chris Metcalf <cmetcalf@...era.com>,
	Christoph Lameter <cl@...ux.com>,
	Geoff Levand <geoff@...radead.org>,
	Gilad Ben Yossef <gilad@...yossef.com>,
	Hakan Akkan <hakanakkan@...il.com>,
	Ingo Molnar <mingo@...nel.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Paul Gortmaker <paul.gortmaker@...driver.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: [PATCH 16/27] sched: Update clock of nohz busiest rq before balancing

move_tasks() and active_load_balance_cpu_stop() both need the busiest
rq clock to be up to date because they may end up calling
can_migrate_task(), which uses rq->clock_task to determine whether the
task running on the busiest runqueue is cache hot. Hence, if the
busiest runqueue is tickless, update its clock before reading it.

Signed-off-by: Frederic Weisbecker <fweisbec@...il.com>
Cc: Alessio Igor Bogani <abogani@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Chris Metcalf <cmetcalf@...era.com>
Cc: Christoph Lameter <cl@...ux.com>
Cc: Geoff Levand <geoff@...radead.org>
Cc: Gilad Ben Yossef <gilad@...yossef.com>
Cc: Hakan Akkan <hakanakkan@...il.com>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Paul Gortmaker <paul.gortmaker@...driver.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
[ Forward port conflicts ]
Signed-off-by: Steven Rostedt <rostedt@...dmis.org>
---
 kernel/sched/fair.c | 17 +++++++++++++++++
 1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3d65ac7..e78d81104 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5002,6 +5002,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 {
 	int ld_moved, cur_ld_moved, active_balance = 0;
 	int lb_iterations, max_lb_iterations;
+	int clock_updated;
 	struct sched_group *group;
 	struct rq *busiest;
 	unsigned long flags;
@@ -5045,6 +5046,7 @@ redo:
 
 	ld_moved = 0;
 	lb_iterations = 1;
+	clock_updated = 0;
 	if (busiest->nr_running > 1) {
 		/*
 		 * Attempt to move tasks. If find_busiest_group has found
@@ -5068,6 +5070,14 @@ more_balance:
 		 */
 		cur_ld_moved = move_tasks(&env);
 		ld_moved += cur_ld_moved;
+
+		/*
+		 * Move tasks may end up calling can_migrate_task() which
+		 * requires an uptodate value of the rq clock.
+		 */
+		update_nohz_rq_clock(busiest);
+		clock_updated = 1;
+
 		double_rq_unlock(env.dst_rq, busiest);
 		local_irq_restore(flags);
 
@@ -5163,6 +5173,13 @@ more_balance:
 			busiest->active_balance = 1;
 			busiest->push_cpu = this_cpu;
 			active_balance = 1;
+			/*
+			 * active_load_balance_cpu_stop may end up calling
+			 * can_migrate_task() which requires an uptodate
+			 * value of the rq clock.
+			 */
+			if (!clock_updated)
+				update_nohz_rq_clock(busiest);
 		}
 		raw_spin_unlock_irqrestore(&busiest->lock, flags);
 
-- 
1.7.5.4

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
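
For readers unfamiliar with the cache-hotness check the changelog refers
to, a rough sketch follows. It is illustrative only, not the patch's
verbatim code: task_hot_sketch() mirrors the shape of the task_hot()
test that can_migrate_task() reaches in kernels of this era, and
update_nohz_rq_clock() shows an assumed shape for the helper introduced
earlier in this series. The full-dynticks predicate name used here,
tick_nohz_full_cpu(), is the one that later reached mainline; the name
in this particular series may differ.

/*
 * Illustrative sketch. can_migrate_task() ends up in a cache-hotness
 * test roughly like this: it compares the source rq clock (here,
 * busiest->clock_task) against the time the task last started
 * executing. If the rq clock is stale because the CPU is tickless,
 * the delta is inflated and a recently-running task can be wrongly
 * treated as cache cold and migrated away.
 */
static int task_hot_sketch(struct task_struct *p, u64 now)
{
	s64 delta = now - p->se.exec_start;

	return delta < (s64)sysctl_sched_migration_cost;
}

/*
 * Assumed shape of the helper this patch calls: only force a clock
 * update when the target CPU runs tickless, since a ticking CPU
 * already refreshes its rq clock from the periodic scheduler tick.
 */
static inline void update_nohz_rq_clock(struct rq *rq)
{
	if (tick_nohz_full_cpu(cpu_of(rq)))	/* predicate name assumed */
		update_rq_clock(rq);
}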