Date:	Wed, 23 Mar 2016 17:54:44 +0530
From:	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To:	Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>
Cc:	linux-kernel@...r.kernel.org, srikar@...ux.vnet.ibm.com,
	Rik van Riel <riel@...hat.com>
Subject: [PATCH 2/3] Reset nr_balance_failed after active balancing

To force a task migration during active balancing, nr_balance_failed is
set to cache_nice_tries + 1. However, nr_balance_failed is not reset.
As a side effect, on the next regular load balance under the same sd, a
cache-hot task might be migrated just because the nr_balance_failed
count is high.

Resetting nr_balance_failed after a successful active balance ensures
that a hot task is not unreasonably migrated. This can be verified by
looking at the number of hot-task migrations reported by
/proc/schedstat.

Signed-off-by: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
---
 kernel/sched/fair.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9abfb16..fae05f4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7294,10 +7294,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 					&busiest->active_balance_work);
 			}
 
-			/*
-			 * We've kicked active balancing, reset the failure
-			 * counter.
-			 */
+			/* We've kicked active balancing, force task migration. */
 			sd->nr_balance_failed = sd->cache_nice_tries+1;
 		}
 	} else
@@ -7532,10 +7529,13 @@ static int active_load_balance_cpu_stop(void *data)
 		schedstat_inc(sd, alb_count);
 
 		p = detach_one_task(&env);
-		if (p)
+		if (p) {
 			schedstat_inc(sd, alb_pushed);
-		else
+			/* Active balancing done, reset the failure counter. */
+			sd->nr_balance_failed = 0;
+		} else {
 			schedstat_inc(sd, alb_failed);
+		}
 	}
 	rcu_read_unlock();
 out_unlock:
-- 
1.8.3.1
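[Editorial note] For readers outside the scheduler code: a stale
nr_balance_failed matters because can_migrate_task() only lets a
cache-hot task move once the domain's failure count exceeds
cache_nice_tries. The standalone C program below is a hypothetical
simulation of that rule, not kernel code; the struct, function names,
and values are illustrative. It shows why, without the reset, every
regular balance after an active balance kick is allowed to migrate a
hot task.

/*
 * Standalone simulation of the nr_balance_failed behaviour this patch
 * changes. Names mirror kernel/sched/fair.c but nothing here is the
 * actual kernel implementation.
 */
#include <stdio.h>
#include <stdbool.h>

struct sched_domain_sim {
	unsigned int nr_balance_failed;
	unsigned int cache_nice_tries;
};

/*
 * Mirrors the can_migrate_task() rule: a cache-hot task may only be
 * migrated once the domain has failed to balance more than
 * cache_nice_tries times.
 */
static bool may_migrate_hot_task(const struct sched_domain_sim *sd)
{
	return sd->nr_balance_failed > sd->cache_nice_tries;
}

int main(void)
{
	struct sched_domain_sim sd = {
		.nr_balance_failed = 0,
		.cache_nice_tries = 2,	/* illustrative value */
	};

	/* load_balance() kicks active balancing and forces migration. */
	sd.nr_balance_failed = sd.cache_nice_tries + 1;
	printf("after active balance kick:       allowed = %d\n",
	       may_migrate_hot_task(&sd));

	/*
	 * Without the reset, the next regular balance still sees a high
	 * failure count and may migrate a cache-hot task.
	 */
	printf("next regular balance (no reset): allowed = %d\n",
	       may_migrate_hot_task(&sd));

	/*
	 * With this patch, active_load_balance_cpu_stop() clears the
	 * counter after a successful push.
	 */
	sd.nr_balance_failed = 0;
	printf("next regular balance (reset):    allowed = %d\n",
	       may_migrate_hot_task(&sd));

	return 0;
}

Compiled and run, this prints allowed = 1, 1, 0 for the three checks:
the reset is what stops regular balances after an active balance from
treating hot tasks as freely migratable.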