Message-ID: <20071203095719.GA23106@elte.hu>
Date:	Mon, 3 Dec 2007 10:57:19 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	Arjan van de Ven <arjan@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: sched_yield: delete sysctl_sched_compat_yield


* Nick Piggin <nickpiggin@...oo.com.au> wrote:

> > as far as desktop apps such as firefox go, the exact opposite is
> > true. We had two choices basically: either a "more aggressive" yield
> > than before or a "less aggressive" yield. Desktop apps were reported
> > to hurt from a "more aggressive" yield (firefox for example gets some
> > pretty bad delays), so we defaulted to the less aggressive method.
> > (and we defaulted to that in v2.6.23 already)
> 
> Yeah, I doubt the 2.6.23 scheduler will be usable for distros 
> though...

... which is a pretty gross exaggeration belied by distros already 
running v2.6.23. Sure, "enterprise" distros might not run .23 (or .22 or 
.21 or .24) because those are slow to adopt and pick _one_ upstream 
kernel every 10 releases without bothering much about anything 
in between. So the enterprise distros might in fact want to see 1-2 
iterations of the scheduler before they switch to it. (But by that 
argument 80% of the other upstream kernels were not used by enterprise 
distros either, so this is nothing new.)

> I was just talking about the default because I didn't know the reason 
> for the way it was set -- now that I do, we should talk about trying 
> to improve the actual code so we don't need 2 defaults.

I've got the patch below queued up: it uses the more aggressive yield 
implementation for SCHED_BATCH tasks. SCHED_BATCH is a natural 
differentiator: it's an "I don't care about latency, it's all about 
throughput for me" signal from the application.
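
For illustration, here is a minimal userspace sketch (not part of the 
patch below) of how an application opts into that signal; it assumes 
glibc's sched_setscheduler() wrapper and does only trivial error handling:

#define _GNU_SOURCE		/* SCHED_BATCH is a Linux-specific policy */
#include <sched.h>
#include <stdio.h>

int main(void)
{
	/* SCHED_BATCH requires a static priority of 0 */
	struct sched_param param = { .sched_priority = 0 };

	/* "throughput over latency" hint for the calling task */
	if (sched_setscheduler(0, SCHED_BATCH, &param) == -1) {
		perror("sched_setscheduler");
		return 1;
	}

	/* with the patch applied, this yield takes the aggressive
	 * (rightmost-entity) path in yield_task_fair() */
	sched_yield();

	return 0;
}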

But first and foremost, do you realize that there are no easy solutions 
to this problem, and that it's not just about 'flipping a default'?

	Ingo

-------------->
Subject: sched: default to more aggressive yield for SCHED_BATCH tasks
From: Ingo Molnar <mingo@...e.hu>

Do a more aggressive yield for SCHED_BATCH tasks.

Signed-off-by: Ingo Molnar <mingo@...e.hu>
---
 kernel/sched_fair.c |    7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

Index: linux/kernel/sched_fair.c
===================================================================
--- linux.orig/kernel/sched_fair.c
+++ linux/kernel/sched_fair.c
@@ -824,8 +824,9 @@ static void dequeue_task_fair(struct rq 
  */
 static void yield_task_fair(struct rq *rq)
 {
-	struct cfs_rq *cfs_rq = task_cfs_rq(rq->curr);
-	struct sched_entity *rightmost, *se = &rq->curr->se;
+	struct task_struct *curr = rq->curr;
+	struct cfs_rq *cfs_rq = task_cfs_rq(curr);
+	struct sched_entity *rightmost, *se = &curr->se;
 
 	/*
 	 * Are we the only task in the tree?
@@ -833,7 +834,7 @@ static void yield_task_fair(struct rq *r
 	if (unlikely(cfs_rq->nr_running == 1))
 		return;
 
-	if (likely(!sysctl_sched_compat_yield)) {
+	if (likely(!sysctl_sched_compat_yield) && curr->policy != SCHED_BATCH) {
 		__update_rq_clock(rq);
 		/*
 		 * Update run-time statistics of the 'current'.
--
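
For reference, the existing global knob can still be flipped from 
userspace, independently of the task policy; a minimal sketch, assuming 
the sysctl is exported as /proc/sys/kernel/sched_compat_yield (as in 
v2.6.23):

#include <stdio.h>

int main(void)
{
	/* assumption: sysctl_sched_compat_yield is exposed at this path */
	FILE *f = fopen("/proc/sys/kernel/sched_compat_yield", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}

	/* "1" selects the aggressive (rightmost-entity) yield for all tasks */
	fputs("1\n", f);
	fclose(f);

	return 0;
}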
