Date:	Sat, 11 Sep 2010 16:52:14 -0400
From:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Tony Lindgren <tony@...mide.com>,
	Mike Galbraith <efault@....de>
Subject: Re: [RFC patch 1/2] sched: dynamically adapt granularity with
	nr_running

* Peter Zijlstra (peterz@...radead.org) wrote:
> On Sat, 2010-09-11 at 12:21 -0700, Linus Torvalds wrote:
> > On Sat, Sep 11, 2010 at 11:57 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> > >
> > > Not at all charmed, this looks like random changes without conceptual
> > > integrity.
> > 
> > I wish people actually looked at the _numbers_ and reacted to them,
> > rather than argue theory.
> > 
> > Guys, we have cases of bad latency under load. That's a pretty
> > undeniable fact. Arguing against a patch because of some theoretical
> > issue without at all even acknowledging the latency improvements is, I
> > think, really bad form.
> > 
> > So please. Acknowledge the latency issue. And come up with better
> > patches, rather than just shoot down alternatives. Because if the
> > answer is just NAK with no alternative, then that answer is worthless.
> > No?
> 
> From what I can make up:
> 
>   LAT=`cat /proc/sys/kernel/sched_latency_ns`; 
>   echo $((LAT/8)) > /proc/sys/kernel/sched_min_granularity_ns
> 
> will give you pretty much the same result as Mathieu's patch.
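
(For the arithmetic: with a hypothetical sched_latency_ns of 6 ms, the
commands above set sched_min_granularity_ns to 0.75 ms, and CFS then
re-derives its internal nr_latency threshold as roughly
sched_latency_ns / sched_min_granularity_ns, i.e. 8 runnable tasks,
beyond which it starts stretching the scheduling period.)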

Not quite. Doing what you propose here would change the scheduling
granularity even when only a few tasks are running, and thus decrease
throughput, because it would schedule more often. My approach does not:
it only shrinks the granularity once the number of running tasks
exceeds 3.
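
To make the difference concrete, here is a small stand-alone model (not
the kernel code and not the patch; the values are placeholders, not the
defaults) of how CFS derives the scheduling period from the two knobs:
the period stays at sched_latency_ns until nr_running exceeds
nr_latency (roughly latency / min_granularity), and only then grows
linearly with the number of runnable tasks.

	/* Stand-alone model of CFS period selection; placeholder values. */
	#include <stdio.h>

	static unsigned long long sched_latency_ns   = 6000000ULL; /* hypothetical 6 ms   */
	static unsigned long long min_granularity_ns =  750000ULL; /* hypothetical: LAT/8 */

	static unsigned long long sched_period(unsigned long nr_running)
	{
		/* nr_latency is re-derived whenever the sysctls change */
		unsigned long nr_latency =
			(sched_latency_ns + min_granularity_ns - 1) / min_granularity_ns;

		if (nr_running > nr_latency)
			return nr_running * min_granularity_ns; /* each task keeps the floor   */
		return sched_latency_ns;                        /* few tasks: period unchanged */
	}

	int main(void)
	{
		for (unsigned long n = 1; n <= 12; n++)
			printf("nr_running=%2lu  period=%llu ns\n", n, sched_period(n));
		return 0;
	}

Lowering sched_min_granularity_ns via the sysctls moves that threshold
up, but, as argued above, it also makes preemption more frequent when
only a handful of tasks are runnable; the patch instead leaves the
few-task case alone and only shrinks the effective granularity once the
runnable count passes 3.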

Thanks,

Mathieu

> 
> But if you want us to change the scheduler to be more latency-sensitive
> and trade away throughput on other benchmarks, we can do that.

-- 
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com
