Message-Id: <1366705662-3587-1-git-send-email-iamjoonsoo.kim@lge.com>
Date:	Tue, 23 Apr 2013 17:27:36 +0900
From:	Joonsoo Kim <iamjoonsoo.kim@....com>
To:	Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>
Cc:	linux-kernel@...r.kernel.org,
	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
	Davidlohr Bueso <davidlohr.bueso@...com>,
	Jason Low <jason.low2@...com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: [PATCH v3 0/6] correct load_balance()

Commit 88b8dac0 makes load_balance() consider other cpus in its group.
However, some pieces are missing for this feature to work properly.
This patchset corrects them and makes load_balance() more robust.
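To illustrate the idea behind 88b8dac0, here is a hypothetical user-space sketch (not the actual kernel code; the function name, the flag array, and the index-based interface are all simplifications for illustration): when the primary destination cpu cannot pull any task, load_balance() retries with another cpu in the same group instead of giving up.

```c
#include <assert.h>

/*
 * Simplified model of dst-cpu selection: try the primary dst cpu first;
 * if tasks are pinned away from it, fall back to another cpu in the group.
 * group_cpus[i] is a cpu id, can_pull[i] is nonzero if that cpu may pull.
 */
static int pick_dst(const int *group_cpus, const int *can_pull,
                    int n, int primary_idx)
{
    if (can_pull[primary_idx])
        return group_cpus[primary_idx];

    /* Primary dst is unusable: consider the other cpus in the group. */
    for (int i = 0; i < n; i++)
        if (i != primary_idx && can_pull[i])
            return group_cpus[i];

    return -1;  /* no cpu in the group can pull: balancing fails */
}
```

Without the fallback loop, pinning tasks away from the primary dst cpu would abort balancing even though another group member could take the load.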

The other changes relate to LBF_ALL_PINNED, the fallback taken when no
task can be moved because of cpu affinity. Currently, if the imbalance
is not large enough for a task's load, the LBF_ALL_PINNED flag is left
set and a 'redo' is triggered. That is not the intention, so correct it.
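A minimal user-space sketch of the intended flow (again a simplification, not the kernel code; the struct, function name, and return convention are illustrative assumptions): the "all pinned" flag should be cleared as soon as any task passes the affinity check, so that a task rejected merely for its load does not trigger 'redo'.

```c
#include <assert.h>

#define LBF_ALL_PINNED 0x01

struct task { int load; int can_run_on_dst; };

/*
 * Returns the load moved, or -1 when every task was rejected by
 * affinity (the only case where retrying on another cpu makes sense).
 */
static int try_balance(const struct task *tasks, int n, int imbalance)
{
    int flags = LBF_ALL_PINNED;
    int moved = 0;

    for (int i = 0; i < n; i++) {
        if (!tasks[i].can_run_on_dst)
            continue;               /* pinned elsewhere: flag stays set */
        flags &= ~LBF_ALL_PINNED;   /* affinity ok: not "all pinned" */
        if (tasks[i].load > imbalance)
            continue;               /* too heavy to move, but not pinned */
        moved += tasks[i].load;
    }

    /* 'redo' is warranted only when affinity blocked every task. */
    return (flags & LBF_ALL_PINNED) ? -1 : moved;
}
```

The key point is the ordering: the affinity check clears the flag before the load check rejects the task, matching the fix described above.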

These patches are based on the sched/core branch of the tip tree.

Changelog
v2->v3: Changes from Peter's suggestion
 [2/6]: change comment
 [3/6]: fix coding style
 [6/6]: fix coding style, fix changelog

v1->v2: Changes from Peter's suggestion
 [4/6]: don't include code to evaluate the load value in can_migrate_task()
 [5/6]: rename load_balance_tmpmask to load_balance_mask
 [6/6]: don't use an extra cpumask; use env's cpus to prevent re-selection

Joonsoo Kim (6):
  sched: change position of resched_cpu() in load_balance()
  sched: explicitly cpu_idle_type checking in rebalance_domains()
  sched: don't consider other cpus in our group in case of NEWLY_IDLE
  sched: move up affinity check to mitigate useless redoing overhead
  sched: rename load_balance_tmpmask to load_balance_mask
  sched: prevent to re-select dst-cpu in load_balance()

 kernel/sched/core.c |    4 +--
 kernel/sched/fair.c |   69 +++++++++++++++++++++++++++++----------------------
 2 files changed, 41 insertions(+), 32 deletions(-)

-- 
1.7.9.5

