Date:	Sat, 21 Apr 2007 14:55:51 -0400
From:	Kyle Moffett <mrmacman_g4@....com>
To:	William Lee Irwin III <wli@...omorphy.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Willy Tarreau <w@....eu>, Ingo Molnar <mingo@...e.hu>,
	Con Kolivas <kernel@...ivas.org>, linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Nick Piggin <npiggin@...e.de>, Mike Galbraith <efault@....de>,
	Arjan van de Ven <arjan@...radead.org>,
	Peter Williams <pwil3058@...pond.net.au>,
	Thomas Gleixner <tglx@...utronix.de>, caglar@...dus.org.tr,
	Gene Heskett <gene.heskett@...il.com>
Subject: Re: [REPORT] cfs-v4 vs sd-0.44

On Apr 21, 2007, at 12:42:41, William Lee Irwin III wrote:
> On Sat, 21 Apr 2007, Willy Tarreau wrote:
>>> If you remember, with 50/50, I noticed some difficulty forking
>>> many processes. I think that during a fork(), the parent has a
>>> higher probability of forking other processes than the child. So
>>> at least we should use something like 67/33 or 75/25 for
>>> parent/child.
>
> On Sat, Apr 21, 2007 at 09:34:07AM -0700, Linus Torvalds wrote:
>> It would be even better to simply have the rule:
>>  - child gets almost no points at startup
>>  - but when a parent does a "waitpid()" call and blocks, it will
>> spread out its points to the children (the "vfork()" blocking is
>> another case that is really the same).
>> This is a very special kind of "priority inversion" logic: you
>> give higher priority to the things you wait for. Not because of
>> holding any locks, but simply because a blocking waitpid really is
>> a damn big hint that "ok, the child now works for the parent".
>
> An in-kernel scheduler API might help. void yield_to(struct  
> task_struct *)?
>
> A userspace API might be nice, too. e.g. int sched_yield_to(pid_t).

It might be nice if it were possible to actively contribute your CPU
time to a child process.  For example:

int sched_donate(pid_t pid, struct timeval *time, int percentage);
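
Just to make the idea concrete (none of this exists; the prototype's
behaviour is pure speculation on my part, and the compile job is just
an example), a parent that is about to block on a child might use it
like this:

/* Sketch only: sched_donate() is a proposed call, stubbed out here
 * as a no-op so the example at least compiles. */
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int sched_donate(pid_t pid, struct timeval *time, int percentage)
{
	(void)pid; (void)time; (void)percentage;
	return 0;	/* real version would hand our CPU time to 'pid' */
}

int main(void)
{
	pid_t child = fork();

	if (child < 0)
		return 1;
	if (child == 0) {
		execlp("cc", "cc", "-O2", "-c", "big_file.c", (char *)NULL);
		_exit(127);
	}

	/* Give the child half of our timeslice for the next 100ms;
	 * we are about to block on it anyway. */
	struct timeval tv = { .tv_sec = 0, .tv_usec = 100 * 1000 };
	sched_donate(child, &tv, 50);

	waitpid(child, NULL, 0);
	return 0;
}

The particular duration/percentage split is arbitrary; the interesting
part is that the donation is explicit and bounded.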

Maybe there should also be a way to pass CPU time over a UNIX socket
(analogous to SCM_RIGHTS), along with information on which process/user
passed it.  That would make it possible to really fix X properly on a
local system.  You could make the X client library pass CPU time to the
X server whenever it requests a CPU-intensive rendering operation.
Ordinarily X would nice all of its client service threads to +10, but
when a client passes CPU time over the socket, its service thread
temporarily gets the scheduling properties of that client.  I'm not a
scheduler guru, but that's what makes the most sense from an
application-programmer point of view.
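
For the socket side, I'm imagining something that looks a lot like the
existing SCM_RIGHTS descriptor-passing interface, just with a new
control-message type.  Rough sketch (SCM_CPUTIME, struct cpu_donation
and the helper are made up; only the sendmsg()/cmsg plumbing is real):

/* Sketch only: SCM_CPUTIME and struct cpu_donation are invented here;
 * the ancillary-data handling is the same as for SCM_RIGHTS. */
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/uio.h>

#define SCM_CPUTIME 0x7f		/* hypothetical new cmsg type */

struct cpu_donation {
	struct timeval tv;		/* how long the donation lasts */
	int percentage;			/* share of our timeslice to give */
};

static ssize_t send_with_cpu_donation(int sock, const void *req, size_t len,
				      const struct cpu_donation *don)
{
	union {				/* ensures cmsg alignment */
		char buf[CMSG_SPACE(sizeof(*don))];
		struct cmsghdr align;
	} u;
	struct iovec iov = { .iov_base = (void *)req, .iov_len = len };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = u.buf, .msg_controllen = sizeof(u.buf),
	};
	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_CPUTIME;	/* SCM_RIGHTS would carry fds here */
	cmsg->cmsg_len = CMSG_LEN(sizeof(*don));
	memcpy(CMSG_DATA(cmsg), don, sizeof(*don));

	return sendmsg(sock, &msg, 0);
}

The X server would then pull the cmsg off with recvmsg() and apply the
donated time to whichever service thread handles that client.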

Cheers,
Kyle Moffett

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
