Message-Id: <20171212124435.070390874@linuxfoundation.org>
Date: Tue, 12 Dec 2017 13:44:39 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	stable@...r.kernel.org,
	kitsunyan <kitsunyan@...ox.ru>,
	"Peter Zijlstra (Intel)" <peterz@...radead.org>,
	Chris Mason <clm@...com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Mike Galbraith <efault@....de>,
	Mike Galbraith <umgwanakikbuti@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...nel.org>,
	Sasha Levin <alexander.levin@...izon.com>
Subject: [PATCH 4.9 069/148] sched/fair: Make select_idle_cpu() more aggressive

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <peterz@...radead.org>

[ Upstream commit 4c77b18cf8b7ab37c7d5737b4609010d2ceec5f0 ]

Kitsunyan reported desktop latency issues on his Celeron 887 because
of commit:

  1b568f0aabf2 ("sched/core: Optimize SCHED_SMT")

... even though his CPU doesn't do SMT.

The effect of running the SMT code on a !SMT part is basically a more
aggressive select_idle_cpu().

Removing the avg condition fixed things for him.

I also know FB likes this test gone, even though other workloads like
having it.

For now, take it out by default, until we get a better idea.
Reported-by: kitsunyan <kitsunyan@...ox.ru>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Chris Mason <clm@...com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mike Galbraith <efault@....de>
Cc: Mike Galbraith <umgwanakikbuti@...il.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: linux-kernel@...r.kernel.org
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Signed-off-by: Sasha Levin <alexander.levin@...izon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 kernel/sched/fair.c     |    2 +-
 kernel/sched/features.h |    5 +++++
 2 files changed, 6 insertions(+), 1 deletion(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5451,7 +5451,7 @@ static int select_idle_cpu(struct task_s
 	 * Due to large variance we need a large fuzz factor; hackbench in
 	 * particularly is sensitive here.
 	 */
-	if ((avg_idle / 512) < avg_cost)
+	if (sched_feat(SIS_AVG_CPU) && (avg_idle / 512) < avg_cost)
 		return -1;
 
 	time = local_clock();
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -51,6 +51,11 @@ SCHED_FEAT(NONTASK_CAPACITY, true)
  */
 SCHED_FEAT(TTWU_QUEUE, true)
 
+/*
+ * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
+ */
+SCHED_FEAT(SIS_AVG_CPU, false)
+
 #ifdef HAVE_RT_PUSH_IPI
 /*
  * In order to avoid a thundering herd attack of CPUs that are