Message-ID: <1312382996.18583.115.camel@gandalf.stny.rr.com>
Date:	Wed, 03 Aug 2011 10:49:56 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Hillf Danton <dhillf@...il.com>
Cc:	LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Mike Galbraith <mgalbraith@...e.de>,
	"Luis Claudio R." <lgoncalv@...hat.com>
Subject: Re: [PATCH][GIT PULL] sched/cpupri: Remove the vec->lock

On Wed, 2011-08-03 at 22:18 +0800, Hillf Danton wrote:
> On Wed, Aug 3, 2011 at 4:36 AM, Steven Rostedt <rostedt@...dmis.org> wrote:
> >    The migrate code does stress the RT tasks a bit. This shows that
> >    the loop did increase a little after the patch, but not by much.
> >    The vec code dropped dramatically. From 4.3us down to .42us.
> >    That's a 10x improvement!
> >
> >    Tested-by: Mike Galbraith <mgalbraith@...e.de>
> >    Tested-by: Luis Claudio R. Gonçalves <lgoncalv@...hat.com>
> >    Tested-by: Matthew Hank Sabins<msabins@...ux.vnet.ibm.com>
> >    Reviewed-by: Gregory Haskins <gregory.haskins@...il.com>
> >    Signed-off-by: Steven Rostedt <rostedt@...dmis.org>
> >
> Acked-by: Hillf Danton <dhillf@...il.com>

Hi Hillf,

Thanks for the ack. But I want to point out this change as something I
want you to see. Remember when I replied to you with your patches asking
about benchmarks and timings and other tests? This patch is a good
example of what I meant.

I made a change that looked obvious. But obvious is not good enough when
you are dealing with the Linux scheduler. Before posting it, I created a
timing patch to record the timings of the affected area for any
workload. I then passed this patch with the timing changes to various
people who reported issues with this part of the code. I also ran it on
my own boxes.

The result was outstanding. That is, everyone that reported back to me
found improvements and no regressions. The improvements were not just in
the timing measurements that I included, but also with their own tests.

Now I'm comfortable with this change.

You sent several patches to me that modified the scheduler in
non-trivial ways, with no benchmarks or tests attached. Before making any
changes to the scheduler, you need to have something that shows that
those changes improve things and do not cause regressions.

I sent these patches out over a month ago to get these results. I'm
putting this change in for v3.2, that way it can get even more testing
in linux-next to make sure we didn't miss anything.

This is what I want you to understand: the scheduler is a core aspect
of Linux, and if we mess it up, it will affect everyone. We can't take
that lightly.

Thanks!

-- Steve


