Date:	Mon, 03 Dec 2007 16:39:11 -0500
From:	Mark Lord <lkml@....ca>
To:	Chris Friesen <cfriesen@...tel.com>
Cc:	davids@...master.com, Nick Piggin <nickpiggin@...oo.com.au>,
	Ingo Molnar <mingo@...e.hu>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	Arjan van de Ven <arjan@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: sched_yield: delete sysctl_sched_compat_yield

Chris Friesen wrote:
> David Schwartz wrote:
>> Chris Friesen wrote:
..
>>> The problem is where do we insert the task that is yielding?  CFS is
>>> based around a tree structure ordered by time.
> 
>> We put it exactly where we would have when its timeslice ran out. If 
>> we can
>> reward it a little bit, that's great. But if not, we can live with that.
>> Just imagine that the timer interrupt fired to indicate the end of the
>> thread's run time when the thread called 'sched_yield'.
> 
> CFS doesn't really do "timeslice".  But in essence what you are 
> describing is the default behaviour currently...it simply removes the 
> task from the tree and reinserts it based on how much cpu time it used up.
> 
>> Then what does he do when the task runs out of run time? It's hard to
>> imagine we can't do that when the task calls sched_yield.
> 
> It gets reinserted into the tree at a position based on how much cpu 
> time it used.  This is exactly the current sched_yield() behaviour.
..

That's not the same thing at all.
I think that David is suggesting that the reinsertion logic
should pretend that the task used up all of the CPU time it
was offered in the slot leading up to the sched_yield() call.
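
In toy-C terms, something like this (a sketch only -- the struct and
names below are made up for illustration, this is not the actual CFS
code):

/* Toy sketch, not kernel code: the names here are invented for
 * illustration.  The point is that sched_yield() would charge the
 * task for the whole slice it was offered, so it gets reinserted
 * exactly where a tick-driven slice expiry would have put it. */
#include <stdint.h>

struct toy_entity {
	uint64_t vruntime;	/* virtual runtime; the smallest runs next */
};

static void toy_yield(struct toy_entity *se, uint64_t offered_slice_vtime)
{
	/* Pretend the whole offered slice was consumed... */
	se->vruntime += offered_slice_vtime;
	/* ...then dequeue/enqueue into the time-ordered tree as usual,
	 * which is where the existing requeue path would take over. */
}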

If it did that, then the task would be far less likely to
end up as the next task chosen to run.

Without doing that, the task is highly likely to be chosen
to run again immediately, as it will appear to have done
nothing since it was previously chosen -- and so the same
selection criteria will pick it again, and again, and again,
until it finally uses up enough CPU time that it is no longer
the leftmost ("run next") task in the tree.
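
To make that concrete, here is a stand-alone toy program (again,
nothing to do with the real scheduler internals -- just min-vruntime
selection over an array): three nice-0 tasks ordered by vruntime; if
the yielder isn't charged anything it stays at the minimum and gets
picked straight back, whereas charging it a full slice drops it
behind the others.

/* Stand-alone toy model, not the real scheduler code.
 * Build with e.g.:  cc -o yield_toy yield_toy.c && ./yield_toy */
#include <stdint.h>
#include <stdio.h>

#define NTASKS   3
#define SLICE_NS 4000000ULL	/* pretend each task is offered ~4ms */

struct toy_entity {
	const char *name;
	uint64_t vruntime;	/* for nice-0, advances at wall-clock rate */
};

/* The "leftmost node of the tree" is simply the minimum vruntime. */
static struct toy_entity *pick_next(struct toy_entity *rq, int n)
{
	struct toy_entity *best = &rq[0];
	for (int i = 1; i < n; i++)
		if (rq[i].vruntime < best->vruntime)
			best = &rq[i];
	return best;
}

int main(void)
{
	struct toy_entity rq[NTASKS] = {
		{ "A (the yielder)", 1000 },
		{ "B",               1500 },
		{ "C",               2000 },
	};

	/* A yields but is charged nothing: it still has the minimum
	 * vruntime, so it is chosen again immediately. */
	printf("uncharged yield  -> next: %s\n", pick_next(rq, NTASKS)->name);

	/* A yields and is first charged a full slice: it now sorts
	 * behind B and C, so somebody else gets to run. */
	rq[0].vruntime += SLICE_NS;
	printf("full-slice yield -> next: %s\n", pick_next(rq, NTASKS)->name);

	return 0;
}

The real thing would of course have to do this on the rbtree requeue
path, but the ordering effect is the same.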

Cheers
