Message-ID: <20240624102331.GI31592@noisy.programming.kicks-ass.net>
Date: Mon, 24 Jun 2024 12:23:31 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Tejun Heo <tj@...nel.org>
Cc: torvalds@...ux-foundation.org, mingo@...hat.com, juri.lelli@...hat.com,
	vincent.guittot@...aro.org, dietmar.eggemann@....com,
	rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
	bristot@...hat.com, vschneid@...hat.com, ast@...nel.org,
	daniel@...earbox.net, andrii@...nel.org, martin.lau@...nel.org,
	joshdon@...gle.com, brho@...gle.com, pjt@...gle.com,
	derkling@...gle.com, haoluo@...gle.com, dvernet@...a.com,
	dschatzberg@...a.com, dskarlat@...cmu.edu, riel@...riel.com,
	changwoo@...lia.com, himadrics@...ia.fr, memxor@...il.com,
	andrea.righi@...onical.com, joel@...lfernandes.org,
	linux-kernel@...r.kernel.org, bpf@...r.kernel.org,
	kernel-team@...a.com
Subject: Re: [PATCH 04/39] sched: Add sched_class->reweight_task()

On Wed, May 01, 2024 at 05:09:39AM -1000, Tejun Heo wrote:
> Currently, during a task weight change, sched core directly calls
> reweight_task() defined in fair.c if @p is on CFS. Let's make it a proper
> sched_class operation instead. CFS's reweight_task() is renamed to
> reweight_task_fair() and now called through sched_class.
> 
> While it turns a direct call into an indirect one, set_load_weight() isn't
> called from a hot path and this change shouldn't cause any noticeable
> difference. This will be used to implement reweight_task() for a new
> BPF-extensible sched_class so that it can keep its cached task weight
> up-to-date.
> 
> Signed-off-by: Tejun Heo <tj@...nel.org>
> Reviewed-by: David Vernet <dvernet@...a.com>
> Acked-by: Josh Don <joshdon@...gle.com>
> Acked-by: Hao Luo <haoluo@...gle.com>
> Acked-by: Barret Rhoden <brho@...gle.com>
> ---
>  kernel/sched/core.c  | 4 ++--
>  kernel/sched/fair.c  | 3 ++-
>  kernel/sched/sched.h | 4 ++--
>  3 files changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index b12b1b7405fd..4b9cb2228b04 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1342,8 +1342,8 @@ static void set_load_weight(struct task_struct *p, bool update_load)
>  	 * SCHED_OTHER tasks have to update their load when changing their
>  	 * weight
>  	 */
> -	if (update_load && p->sched_class == &fair_sched_class) {
> -		reweight_task(p, prio);
> +	if (update_load && p->sched_class->reweight_task) {
> +		p->sched_class->reweight_task(task_rq(p), p, prio);
>  	} else {
>  		load->weight = scale_load(sched_prio_to_weight[prio]);
>  		load->inv_weight = sched_prio_to_wmult[prio];
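
(For context: the operation this patch adds to struct sched_class,
reconstructed below from the call site in the hunk above -- the parameter
name "newprio" and the reweight_task_ext() example are illustrative
guesses, not taken from the patch.)

	/* new callback in struct sched_class (kernel/sched/sched.h) */
	void (*reweight_task)(struct rq *rq, struct task_struct *p, int newprio);

	/*
	 * Hypothetical sketch of what an extensible class could do with it:
	 * keep a cached copy of the task weight current. "ext" and
	 * "cached_weight" are made-up names for illustration only.
	 */
	static void reweight_task_ext(struct rq *rq, struct task_struct *p, int newprio)
	{
		p->ext.cached_weight = scale_load_down(p->se.load.weight);
	}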

This reminds me, I think we have a bug here: the SCHED_IDLE early return
writes p->se.load directly and never goes through reweight_task(), so a
fair-class task switching to SCHED_IDLE bypasses reweight_entity() and
its load-tracking fixups. See:

  https://lkml.kernel.org/r/20240422094157.GA34453@noisy.programming.kicks-ass.net

I *think* we want something like the below, hmm?


diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0935f9d4bb7b..32a40d85c0b1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1328,15 +1328,15 @@ int tg_nop(struct task_group *tg, void *data)
 void set_load_weight(struct task_struct *p, bool update_load)
 {
 	int prio = p->static_prio - MAX_RT_PRIO;
-	struct load_weight *load = &p->se.load;
+	unsigned long weight;
+	u32 inv_weight;
 
-	/*
-	 * SCHED_IDLE tasks get minimal weight:
-	 */
 	if (task_has_idle_policy(p)) {
-		load->weight = scale_load(WEIGHT_IDLEPRIO);
-		load->inv_weight = WMULT_IDLEPRIO;
-		return;
+		weight = scale_load(WEIGHT_IDLEPRIO);
+		inv_weight = WMULT_IDLEPRIO;
+	} else {
+		weight = scale_load(sched_prio_to_weight[prio]);
+		inv_weight = sched_prio_to_wmult[prio];
 	}
 
 	/*
@@ -1344,10 +1344,11 @@ void set_load_weight(struct task_struct *p, bool update_load)
 	 * weight
 	 */
 	if (update_load && p->sched_class == &fair_sched_class) {
-		reweight_task(p, prio);
+		reweight_task(p, weight, inv_weight);
 	} else {
-		load->weight = scale_load(sched_prio_to_weight[prio]);
-		load->inv_weight = sched_prio_to_wmult[prio];
+		struct load_weight *lw = &p->se.load;
+		lw->weight = weight;
+		lw->inv_weight = inv_weight;
 	}
 }
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 41b58387023d..07398042e342 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3835,7 +3835,7 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
 	}
 }
 
-void reweight_task(struct task_struct *p, int prio)
+void reweight_task(struct task_struct *p, unsigned long weight, u32 inv_weight)
 {
 	struct sched_entity *se = &p->se;
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 62fd8bc6fd08..c1d07957e38a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2509,7 +2509,7 @@ extern void init_sched_dl_class(void);
 extern void init_sched_rt_class(void);
 extern void init_sched_fair_class(void);
 
-extern void reweight_task(struct task_struct *p, int prio);
+extern void reweight_task(struct task_struct *p, unsigned long weight, u32 inv_weight);
 
 extern void resched_curr(struct rq *rq);
 extern void resched_cpu(int cpu);
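
With that, reweight_task() presumably ends up looking something like the
below -- a sketch, with the body partially reconstructed (the "load"
local and the reweight_entity() call are not visible in the truncated
fair.c hunk above):

	void reweight_task(struct task_struct *p, unsigned long weight, u32 inv_weight)
	{
		struct sched_entity *se = &p->se;
		struct cfs_rq *cfs_rq = cfs_rq_of(se);
		struct load_weight *load = &se->load;

		/* weight comes in pre-scaled; no sched_prio_to_weight[] lookup here */
		reweight_entity(cfs_rq, se, weight);
		load->inv_weight = inv_weight;
	}

The point being that the SCHED_IDLE weight now also flows through
reweight_entity(), instead of being written straight into p->se.load.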
