Message-Id: <20200121063307.17221-3-parth@linux.ibm.com>
Date:   Tue, 21 Jan 2020 12:03:04 +0530
From:   Parth Shah <parth@...ux.ibm.com>
To:     linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Cc:     peterz@...radead.org, mingo@...hat.com, vincent.guittot@...aro.org,
        dietmar.eggemann@....com, patrick.bellasi@...bug.net,
        valentin.schneider@....com, pavel@....cz, dsmythies@...us.net,
        qperret@...gle.com, tim.c.chen@...ux.intel.com
Subject: [RFC v6 2/5] sched/core: Update turbo_sched count only when required

Use the get/put methods to add/remove the use of TurboSched support,
such that the feature is turned on only in the presence of at least one
classified small background task.

Signed-off-by: Parth Shah <parth@...ux.ibm.com>
---
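Reviewer note: turbo_sched_get()/turbo_sched_put() are declared in
sched.h below but defined elsewhere in this series. A minimal sketch of
the refcounting they are assumed to implement (the counter and the
is_turbosched_enabled() helper are illustrative, not the series' code):

static atomic_t turbo_sched_count = ATOMIC_INIT(0);

/* Taken when a task becomes a classified small background task. */
void turbo_sched_get(void)
{
	atomic_inc(&turbo_sched_count);
}

/* Dropped when such a task dies or is reclassified. */
void turbo_sched_put(void)
{
	atomic_dec(&turbo_sched_count);
}

/*
 * TurboSched stays active while at least one bg task exists; plain
 * atomics keep both calls safe from the non-sleepable
 * finish_task_switch() context used in this patch.
 */
static inline bool is_turbosched_enabled(void)
{
	return atomic_read(&turbo_sched_count) > 0;
}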
 kernel/sched/core.c  | 8 ++++++++
 kernel/sched/sched.h | 3 +++
 2 files changed, 11 insertions(+)
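
The refcount update in __sched_setscheduler() below fires only when the
task's classification actually changes; the ternary in the hunk expands
to the following if/else (illustration only, equivalent logic):

	if (!is_bg_task(p) && attr_leniency)
		turbo_sched_get();	/* task becomes a bg task */
	else if (is_bg_task(p) && !attr_leniency)
		turbo_sched_put();	/* task stops being a bg task */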

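For context, is_bg_task() (added to sched.h below) classifies a task as
a small background task when its latency_nice value equals
MAX_LATENCY_NICE. A userspace sketch of how a task would opt in,
assuming the sched_attr extension from the related latency-nice RFC
series (the flag value, struct layout, and nice value are taken from
that series and are assumptions here, not released UAPI):

#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

#define SCHED_FLAG_LATENCY_NICE	0x80	/* value assumed from the RFC */

/* sched_attr as extended by the latency-nice series (not in glibc). */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
	int32_t  sched_latency_nice;	/* new field from that series */
};

int main(void)
{
	struct sched_attr attr = {
		.size			= sizeof(attr),
		.sched_flags		= SCHED_FLAG_LATENCY_NICE,
		.sched_latency_nice	= 19,	/* MAX_LATENCY_NICE */
	};

	/* pid 0 classifies the calling task itself. */
	return syscall(SYS_sched_setattr, 0, &attr, 0) ? 1 : 0;
}
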
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index dfbb52d66b29..629c2589d727 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3272,6 +3272,9 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 		mmdrop(mm);
 	}
 	if (unlikely(prev_state == TASK_DEAD)) {
+		if (unlikely(is_bg_task(prev)))
+			turbo_sched_put();
+
 		if (prev->sched_class->task_dead)
 			prev->sched_class->task_dead(prev);
 
@@ -4800,6 +4803,7 @@ static int __sched_setscheduler(struct task_struct *p,
 	int reset_on_fork;
 	int queue_flags = DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
 	struct rq *rq;
+	bool attr_leniency = bgtask_latency(attr->sched_latency_nice);
 
 	/* The pi code expects interrupts enabled */
 	BUG_ON(pi && in_interrupt());
@@ -5024,6 +5028,10 @@ static int __sched_setscheduler(struct task_struct *p,
 
 	prev_class = p->sched_class;
 
+	/* Refcount tasks classified as small background tasks */
+	if (is_bg_task(p) != attr_leniency)
+		attr_leniency ? turbo_sched_get() : turbo_sched_put();
+
 	__setscheduler(rq, p, attr, pi);
 	__setscheduler_uclamp(p, attr);
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index f841297b7d56..0a00e16e033a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2498,6 +2498,9 @@ static inline void membarrier_switch_mm(struct rq *rq,
 }
 #endif
 
+#define bgtask_latency(lat)	((lat) == MAX_LATENCY_NICE)
+#define is_bg_task(p)		(bgtask_latency((p)->latency_nice))
+
 void turbo_sched_get(void);
 void turbo_sched_put(void);
 
-- 
2.17.2
