Message-Id: <b17a3db703494bdae8492843f7303c13b0dbd0fc.1695704179.git.yu.c.chen@intel.com>
Date:   Tue, 26 Sep 2023 13:11:02 +0800
From:   Chen Yu <yu.c.chen@...el.com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Ingo Molnar <mingo@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Juri Lelli <juri.lelli@...hat.com>
Cc:     Tim Chen <tim.c.chen@...el.com>, Aaron Lu <aaron.lu@...el.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>,
        K Prateek Nayak <kprateek.nayak@....com>,
        "Gautham R . Shenoy" <gautham.shenoy@....com>,
        linux-kernel@...r.kernel.org, Chen Yu <yu.chen.surf@...il.com>,
        Chen Yu <yu.c.chen@...el.com>
Subject: [PATCH 1/2] sched/fair: Record the short sleeping time of a task

During task wakeup, the wakee first checks whether its previous
CPU is idle, and if so picks that CPU as its first choice.
However, in most cases the wakee's previous CPU has already been
taken by another task, which breaks cache locality.
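
As a rough illustration only (the real logic lives in
select_idle_sibling() and checks more conditions;
pick_other_idle_cpu() below is a made-up placeholder for the
fallback idle-CPU search):

	/* Simplified sketch of the wakeup fast path. */
	static int sketch_select_wakeup_cpu(struct task_struct *p, int prev)
	{
		/* Prefer the previous CPU while it is still idle: cache-hot. */
		if (available_idle_cpu(prev))
			return prev;

		/* Otherwise fall back to scanning for another idle CPU. */
		return pick_other_idle_cpu(p);
	}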

Propose a mechanism that reserves the task's previous CPU for a
short while. During this reservation period, other tasks are not
allowed to pick that CPU until the reservation times out. The
reservation period is defined as the task's average short sleep
time, i.e. the average time delta between the task being dequeued
and enqueued again. Only sleep times shorter than
sysctl_sched_migration_cost are recorded; if the sleep time
exceeds sysctl_sched_migration_cost, the reservation period is
penalized by halving it. In this way the 'burst' sleeping pattern
of the task is honored, while a task that becomes a long sleeper
has its reservation time shrunk to reduce the impact on task
wakeup.
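
For reference, update_avg() is the scheduler's existing running
average helper in kernel/sched/core.c (*avg += (sample - *avg) / 8),
so the reservation estimate evolves like this (made-up numbers):

	sis_rsv_avg = 8000ns, next short sleep = 16000ns
	  -> update_avg(): 8000 + (16000 - 8000) / 8 = 9000ns
	next sleep exceeds sysctl_sched_migration_cost
	  -> penalty:      9000 >> 1 = 4500ns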

Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Signed-off-by: Chen Yu <yu.c.chen@...el.com>
---
 include/linux/sched.h |  3 +++
 kernel/sched/fair.c   | 21 +++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index dc37ae787e33..4a0ac0276384 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -561,6 +561,9 @@ struct sched_entity {
 	u64				vruntime;
 	s64				vlag;
 	u64				slice;
+	u64				prev_dequeue_time;
+	/* the reservation period of this task during wakeup */
+	u64				sis_rsv_avg;
 
 	u64				nr_migrations;
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d0877878bcdb..297b9470829c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6456,6 +6456,24 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	struct sched_entity *se = &p->se;
 	int idle_h_nr_running = task_has_idle_policy(p);
 	int task_new = !(flags & ENQUEUE_WAKEUP);
+	u64 last_dequeue = p->se.prev_dequeue_time;
+	u64 now = sched_clock_cpu(task_cpu(p));
+
+	/*
+	 * If the task is a short-sleeping task, there is no need
+	 * to migrate it to other CPUs. Estimate the average short sleeping
+	 * time of the wakee. This sleep time is used as a hint to reserve
+	 * the dequeued task's previous CPU for a short while. During this
+	 * reservation period, select_idle_cpu() prevents other wakees from
+	 * choosing this CPU, which can improve cache locality.
+	 */
+	if ((flags & ENQUEUE_WAKEUP) && last_dequeue && cpu_online(task_cpu(p)) &&
+	    now > last_dequeue) {
+		if (now - last_dequeue < sysctl_sched_migration_cost)
+			update_avg(&p->se.sis_rsv_avg, now - last_dequeue);
+		else
+			p->se.sis_rsv_avg >>= 1;
+	}
 
 	/*
 	 * The code below (indirectly) updates schedutil which looks at
@@ -6550,6 +6568,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	int task_sleep = flags & DEQUEUE_SLEEP;
 	int idle_h_nr_running = task_has_idle_policy(p);
 	bool was_sched_idle = sched_idle_rq(rq);
+	u64 now;
 
 	util_est_dequeue(&rq->cfs, p);
 
@@ -6611,6 +6630,8 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 dequeue_throttle:
 	util_est_update(&rq->cfs, p, task_sleep);
 	hrtick_update(rq);
+	now = sched_clock_cpu(cpu_of(rq));
+	p->se.prev_dequeue_time = task_sleep ? now : 0;
 }
 
 #ifdef CONFIG_SMP
-- 
2.25.1
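
The consumer side is in patch 2/2 of this series (not included in
this message). Purely as a hypothetical sketch of the idea, with
cpu_reservation_active() being an invented name, the check that
select_idle_cpu() performs could look like:

	/*
	 * Hypothetical sketch: @p is the task that last went to sleep
	 * on @cpu. Treat the CPU as unavailable while that task is
	 * still inside its reservation window.
	 */
	static bool cpu_reservation_active(struct task_struct *p, int cpu)
	{
		u64 now = sched_clock_cpu(cpu);

		return p->se.prev_dequeue_time &&
		       now - p->se.prev_dequeue_time < p->se.sis_rsv_avg;
	}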
