Message-Id: <20231231175218.510721-1-qyousef@layalina.io>
Date: Sun, 31 Dec 2023 17:52:16 +0000
From: Qais Yousef <qyousef@...alina.io>
To: Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>
Cc: linux-kernel@...r.kernel.org,
	"Pierre Gondois" <Pierre.Gondois@....com>,
	Qais Yousef <qyousef@...alina.io>
Subject: [PATCH v3 0/2] sched: Don't trigger misfit if affinity is restricted

Changes since v2:

	* Convert access of asym_cap_list to be RCU protected (see the sketch below)
	* Add new patch to sort the list in descending order
	* Move some declarations inside affinity check block
	* Remove now redundant check against max_cpu_capacity in check_misfit_status()

(thanks Pierre!)
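
For context, a rough sketch of what the RCU read-side access pattern for
asym_cap_list can look like. The helper name below is made up purely for
illustration, and the asym_cap_data ->link/->capacity field names are
assumptions based on kernel/sched/topology.c rather than quotes from the
patch:

/*
 * Hypothetical helper, for illustration only: walk asym_cap_list under
 * the RCU read lock and check whether @util fits at least one capacity
 * level in the system.
 */
static bool task_fits_any_capacity(unsigned long util)
{
	struct asym_cap_data *entry;
	bool fits = false;

	rcu_read_lock();
	list_for_each_entry_rcu(entry, &asym_cap_list, link) {
		/* With the list sorted in descending order (patch 2),
		 * the first entry is the largest capacity available. */
		if (util <= entry->capacity) {
			fits = true;
			break;
		}
	}
	rcu_read_unlock();

	return fits;
}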

Changes since v1:

	* Use asym_cap_list (thanks Dietmar) to iterate instead of iterating
	  through every cpu, which Vincent was concerned about.
	* Use uclamped util to compare with capacity instead of util_fits_cpu()
	  when iterating through capacities (Dietmar).
	* Update commit log with test results to better demonstrate the problem

v1 discussion: https://lore.kernel.org/lkml/20230820203429.568884-1-qyousef@layalina.io/#t
v2 discussion: https://lore.kernel.org/lkml/20231212154056.626978-1-qyousef@layalina.io/

Food for thought:
-----------------

Should misfit cause balance_interval to double? Even if the answer is yes, this
patch will still be needed to keep unnecessary misfit-lb from triggering
repeatedly.

Should the doubling be made independent of tick value? As it stands, 3 failures
for TICK = 1ms will increase it to 8ms. But for a 4ms tick this becomes 32ms
after 3 failures, which I think is too high too soon.
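
To make those numbers concrete, here is a tiny userspace illustration of the
doubling arithmetic described above (just the arithmetic, not kernel code; the
assumption is that the interval effectively starts at one tick period in ms
and doubles on every failed balance):

#include <stdio.h>

int main(void)
{
	unsigned int ticks_ms[] = { 1, 4 };

	for (int i = 0; i < 2; i++) {
		unsigned int interval = ticks_ms[i];

		printf("TICK = %ums:", ticks_ms[i]);
		for (int fail = 1; fail <= 3; fail++) {
			interval *= 2;	/* doubled after each failed balance */
			printf(" %ums", interval);
		}
		printf("\n");
	}

	return 0;
}

This prints 2ms 4ms 8ms for a 1ms tick and 8ms 16ms 32ms for a 4ms tick,
matching the 8ms vs 32ms figures above.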

Should the balance_interval be capped to something more reasonable? On systems
that require fast response (an interactive desktop, for example), a
balance_interval of 64ms and above seems too high.


Thanks!

--
Qais Yousef

Qais Yousef (2):
  sched/fair: Check a task has a fitting cpu when updating misfit
  sched/topology: Sort asym_cap_list in descending order

 kernel/sched/fair.c     | 65 ++++++++++++++++++++++++++++++++++-------
 kernel/sched/sched.h    | 14 +++++++++
 kernel/sched/topology.c | 43 +++++++++++++++------------
 3 files changed, 94 insertions(+), 28 deletions(-)

-- 
2.34.1

