Message-ID: <20250131070938.95551-5-changwoo@igalia.com>
Date: Fri, 31 Jan 2025 16:09:31 +0900
From: Changwoo Min <changwoo@...lia.com>
To: tj@...nel.org,
	void@...ifault.com,
	arighi@...dia.com
Cc: kernel-dev@...lia.com,
	linux-kernel@...r.kernel.org,
	Changwoo Min <changwoo@...lia.com>
Subject: [PATCH v3 04/11] sched_ext: Add an event, SCX_EV_DISPATCH_KEEP_LAST

Add a core event, SCX_EV_DISPATCH_KEEP_LAST, which counts how many
times a task continues to run without ops.enqueue() being called when
SCX_OPS_ENQ_LAST is not set.
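
For reference, a BPF scheduler can read the new counter through the
scx_bpf_events() kfunc. A minimal sketch (the bpf_printk() output
format is illustrative only):

	struct scx_event_stats events;

	scx_bpf_events(&events, sizeof(events));
	bpf_printk("dispatch_keep_last=%llu",
		   events.SCX_EV_DISPATCH_KEEP_LAST);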

__scx_add_event() is used because the caller holds an rq lock, so
preemption is already disabled.
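
For context, a simplified sketch of the two event helpers (their actual
definitions come from an earlier patch in this series) shows why holding
an rq lock makes the double-underscore variant sufficient:

	/*
	 * Safe to call with preemption enabled; this_cpu_add() itself
	 * guards against preemption.
	 */
	#define scx_add_event(name, cnt) \
		this_cpu_add(event_stats_cpu.name, (cnt))

	/*
	 * The caller must have preemption disabled, e.g. by holding an
	 * rq lock; __this_cpu_add() skips that protection.
	 */
	#define __scx_add_event(name, cnt) \
		__this_cpu_add(event_stats_cpu.name, (cnt))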

Signed-off-by: Changwoo Min <changwoo@...lia.com>
---
 kernel/sched/ext.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 041b0af3551a..7147f730850b 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -1455,6 +1455,12 @@ struct scx_event_stats {
 	 * the meantime. In this case, the task is bounced to the global DSQ.
 	 */
 	u64		SCX_EV_DISPATCH_LOCAL_DSQ_OFFLINE;
+
+	/*
+	 * If SCX_OPS_ENQ_LAST is not set, the number of times that a task
+	 * continued to run because there were no other tasks on the CPU.
+	 */
+	u64		SCX_EV_DISPATCH_KEEP_LAST;
 };
 
 /*
@@ -2908,6 +2914,7 @@ static int balance_one(struct rq *rq, struct task_struct *prev)
 	if (prev_on_rq && (!static_branch_unlikely(&scx_ops_enq_last) ||
 	     scx_rq_bypassing(rq))) {
 		rq->scx.flags |= SCX_RQ_BAL_KEEP;
+		__scx_add_event(SCX_EV_DISPATCH_KEEP_LAST, 1);
 		goto has_tasks;
 	}
 	rq->scx.flags &= ~SCX_RQ_IN_BALANCE;
@@ -4978,6 +4985,7 @@ static void scx_dump_state(struct scx_exit_info *ei, size_t dump_len)
 	scx_bpf_events(&events, sizeof(events));
 	scx_dump_event(s, &events, SCX_EV_SELECT_CPU_FALLBACK);
 	scx_dump_event(s, &events, SCX_EV_DISPATCH_LOCAL_DSQ_OFFLINE);
+	scx_dump_event(s, &events, SCX_EV_DISPATCH_KEEP_LAST);
 
 	if (seq_buf_has_overflowed(&s) && dump_len >= sizeof(trunc_marker))
 		memcpy(ei->dump + dump_len - sizeof(trunc_marker),
@@ -7113,6 +7121,7 @@ __bpf_kfunc void scx_bpf_events(struct scx_event_stats *events,
 		e_cpu = per_cpu_ptr(&event_stats_cpu, cpu);
 		scx_agg_event(&e_sys, e_cpu, SCX_EV_SELECT_CPU_FALLBACK);
 		scx_agg_event(&e_sys, e_cpu, SCX_EV_DISPATCH_LOCAL_DSQ_OFFLINE);
+		scx_agg_event(&e_sys, e_cpu, SCX_EV_DISPATCH_KEEP_LAST);
 	}
 
 	/*
-- 
2.48.1

