Date:	Sun, 18 Mar 2007 07:00:53 +0200
From:	Avi Kivity <avi@...o.co.il>
To:	William Lee Irwin III <wli@...omorphy.com>
Cc:	Ingo Molnar <mingo@...e.hu>, Con Kolivas <kernel@...ivas.org>,
	ck@....kolivas.org, Serge Belyshev <belyshev@...ni.sinp.msu.ru>,
	Al Boldi <a1426z@...ab.com>, Mike Galbraith <efault@....de>,
	linux-kernel@...r.kernel.org, Nicholas Miell <nmiell@...cast.net>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: is RSDL an "unfair" scheduler too?

William Lee Irwin III wrote:
> On Sat, Mar 17, 2007 at 10:41:01PM +0200, Avi Kivity wrote:
>   
>> Well, the heuristic here is that process == job.  I'm not sure heuristic
>> is the right name for it, but it does point out a deficiency.
>> A cpu-bound process with many threads will overwhelm a cpu-bound
>> single-threaded process.
>> A job with many processes will overwhelm a job with a single process.
>> A user with many jobs can starve a user with a single job.
>> I don't think the problem here is heuristics, but rather that the
>> scheduler manages cpu quotas at the task level rather than at the
>> user-visible level.  If scheduling were managed at all three hierarchy
>> levels I mentioned ('job' is a bit artificial, but process and user are
>> not) then:
>> - if N users are contending for the cpu on a multiuser machine, each
>> should get just 1/N of the available cpu power.  As it is, a user can
>> run a few of your #1 workloads (or a make -j 20) and slow every other
>> user down
>> - your example would work perfectly (if we can communicate to the kernel
>> what a job is)
>> - multi-threaded processes would not get an unfair advantage
>>     
>
> I like this notion very much. I should probably mention pgrps' typical
> association with the notion of "job," at least as far as shells go.
>
>   

One day I might understand what pgrp, sessions, and all that stuff is.

> One issue this raises is prioritizing users on a system, threads within
> processes, jobs within users, etc. Maybe sessions would make sense, too,
> and classes of users, and maybe whatever they call the affairs that pid
> namespaces are a part of (someone will doubtless choke on the hierarchy
> depth implied here but it doesn't bother me in the least). It's not a
> deep or difficult issue. There just needs to be some user API to set the
> relative scheduling priorities of all these affairs within the next higher
> level of hierarchy, regardless of how many levels of hierarchy (aleph_0?).
>   

I think it follows naturally.
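
To make the arithmetic concrete, here is a rough user-space sketch of the
strict hierarchy (the entity names and weights are made up for illustration;
this is not an existing kernel interface): each level splits its parent's
share among its children in proportion to their weights, so a user running
make -j 20 gets no more of the machine than a user running a single compile.

/*
 * Hypothetical sketch, not kernel code: effective cpu share under a
 * strict hierarchy.  Each entity divides its parent's share among its
 * children by relative weight.
 */
#include <stdio.h>

struct entity {
	const char *name;
	unsigned int weight;		/* relative priority within parent */
	struct entity *parent;
};

/* Sum of weights of all entities that share this entity's parent. */
static unsigned int sibling_weight(struct entity *e,
				   struct entity *all, int n)
{
	unsigned int sum = 0;
	for (int i = 0; i < n; i++)
		if (all[i].parent == e->parent)
			sum += all[i].weight;
	return sum;
}

/* Walk up the tree, multiplying the fraction taken at each level. */
static double effective_share(struct entity *e, struct entity *all, int n)
{
	double share = 1.0;
	for (; e; e = e->parent)
		share *= (double)e->weight / sibling_weight(e, all, n);
	return share;
}

int main(void)
{
	struct entity ents[] = {
		{ "user A",       1, NULL     },
		{ "user B",       1, NULL     },
		{ "A: make -j20", 1, &ents[0] },
		{ "A: editor",    1, &ents[0] },
		{ "B: compile",   1, &ents[1] },
	};
	int n = sizeof(ents) / sizeof(ents[0]);

	for (int i = 0; i < n; i++)
		printf("%-14s %.3f of the machine\n", ents[i].name,
		       effective_share(&ents[i], ents, n));
	return 0;
}

With equal weights this prints 0.500 for each user and 0.250 for each of
user A's two tasks, no matter how many threads A spawns.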

Note that more than the scheduler needs to be taught about this.  The
page cache and swapper should prevent a user from swapping out too many
of another user's pages when there is contention for memory; there
should be per-user quotas for network and disk bandwidth, etc.  Until
then, people who want true multiuser operation with untrusted users will
be forced to use ugly hacks like virtualization.  Fortunately it seems
the container people are addressing at least part of this.
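
The same accounting idea, sketched very roughly for memory (again, the
structure and function names here are hypothetical, not anything in the
kernel today): a charge against a per-user quota fails instead of pushing
another user's pages out, so reclaim stays within the offending user.

#include <stdbool.h>
#include <stdio.h>

struct user_quota {
	unsigned long pages_used, pages_max;		/* page cache + anon */
	unsigned long disk_bw_used, disk_bw_max;	/* bytes this period */
	unsigned long net_bw_used,  net_bw_max;
};

/* Charge 'nr' pages to the owning user; on failure the caller must
 * reclaim from this user's own pages, not from some other user's. */
static bool user_charge_pages(struct user_quota *uq, unsigned long nr)
{
	if (uq->pages_used + nr > uq->pages_max)
		return false;
	uq->pages_used += nr;
	return true;
}

int main(void)
{
	struct user_quota alice = { .pages_max = 1000 };

	/* Alice may fill her own quota... */
	printf("charge 900: %d\n", user_charge_pages(&alice, 900));
	/* ...but the next charge fails instead of evicting someone else. */
	printf("charge 200: %d\n", user_charge_pages(&alice, 200));
	return 0;
}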

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.

