Message-ID: <Pine.LNX.4.58.0710091645110.25406@gandalf.stny.rr.com>
Date:	Tue, 9 Oct 2007 16:50:47 -0400 (EDT)
From:	Steven Rostedt <rostedt@...dmis.org>
To:	mike kravetz <kravetz@...ibm.com>
cc:	Gregory Haskins <ghaskins@...ell.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...e.hu>,
	linux-rt-users <linux-rt-users@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>, pmorreale@...ell.com,
	sdietrich@...ell.com
Subject: Re: [RFC PATCH RT] push waiting rt tasks to cpus with lower prios.


--
On Tue, 9 Oct 2007, mike kravetz wrote:

>
> I did something like this a while ago for another scheduling project.
> A couple 'possible' optimizations to think about are:
> 1) Only scan the remote runqueues once and keep a local copy of the
>    remote priorities for subsequent 'scans'.  Accessing the remote
>    runqueues (CPU-specific cache lines) can be expensive.

You mean to keep the copy for the next two tries?
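
Roughly, I read that as something like the below (purely illustrative:
the function name and the on-stack snapshot array are made up, and this
assumes it lives in kernel/sched.c of the -rt tree, where struct rq,
cpu_rq() and for_each_online_cpu() are visible):

	/* One pass over the remote runqueues, caching what we saw. */
	static int find_lowest_prio_cpu(struct task_struct *p)
	{
		int snap[NR_CPUS];	/* local copy of remote prios */
		int cpu, lowest_cpu = -1;
		int lowest_prio = p->prio;

		/* touch each remote cache line exactly once */
		for_each_online_cpu(cpu)
			snap[cpu] = cpu_rq(cpu)->curr->prio;

		/*
		 * Subsequent "scans" (the retries) work on the snapshot
		 * instead of hitting the remote runqueues again. Higher
		 * prio number == lower priority here, so we look for the
		 * largest number above p's.
		 */
		for_each_online_cpu(cpu) {
			if (snap[cpu] > lowest_prio) {
				lowest_prio = snap[cpu];
				lowest_cpu = cpu;
			}
		}
		return lowest_cpu;	/* -1 if nothing preemptible */
	}

(An int[NR_CPUS] on the kernel stack is too fat once NR_CPUS gets big;
a real version would want a per-cpu scratch area or similar.)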

> 2) When verifying priorities, just perform spin_trylock() on the remote
>    runqueue.  If you can immediately get it, great.  If not, it implies
>    someone else is messing with the runqueue and there is a good chance
>    the data you pre-fetched (curr->prio) is invalid.  In this case
>    it might be faster to just 'move on' to the next candidate runqueue/CPU,
>    i.e. the next highest priority that the new task can preempt.

I was a bit scared of grabbing the lock anyway, because that's another
cache hit (on the write side). So only grabbing the lock when needed would
save us from dirtying the runqueue lock's cache line for each CPU.
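
Something like this for the trylock part, then (again just a sketch;
lock_candidate() is a made-up name, and the real push path would have
to play by double_lock_balance()-style rules):

	/*
	 * Try to pin down a candidate CPU picked from the snapshot.
	 * On contention, or if the snapshot went stale, give up and
	 * let the caller move on to the next candidate.
	 */
	static struct rq *lock_candidate(struct task_struct *p, int cpu,
					 int snap_prio)
	{
		struct rq *rq = cpu_rq(cpu);

		if (!spin_trylock(&rq->lock))
			return NULL;	/* contended: data likely stale */

		if (rq->curr->prio != snap_prio ||
		    rq->curr->prio <= p->prio) {
			spin_unlock(&rq->lock);	/* stale: try next CPU */
			return NULL;
		}
		return rq;	/* locked, and still preemptible by p */
	}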

>
> Of course, these 'optimizations' would change the algorithm.  Trying to
> make any decision based on data that is changing is always a crap shoot. :)

Yes indeed. The aim for now is to solve the latencies that you've been
seeing. But really, there are still holes (small ones) that can cause a
latency if a schedule happens "just right". Hopefully the final result of
this work will close them too.

-- Steve

