Message-ID: <20070525082951.GA25280@elte.hu>
Date:	Fri, 25 May 2007 10:29:51 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Srivatsa Vaddagiri <vatsa@...ibm.com>
Cc:	Guillaume Chazarain <guichaz@...oo.fr>,
	Nick Piggin <nickpiggin@...oo.com.au>, efault@....de,
	kernel@...ivas.org, containers@...ts.osdl.org,
	ckrm-tech@...ts.sourceforge.net, torvalds@...ux-foundation.org,
	akpm@...ux-foundation.org, pwil3058@...pond.net.au,
	tingy@...umass.edu, tong.n.li@...el.com, wli@...omorphy.com,
	linux-kernel@...r.kernel.org, Balbir Singh <balbir@...ibm.com>
Subject: Re: [RFC] [PATCH 0/3] Add group fairness to CFS


* Srivatsa Vaddagiri <vatsa@...ibm.com> wrote:

> Can you repeat your tests with this patch pls? With the patch applied, 
> I am now getting the same split between a nice-0 and a nice-10 task as 
> CFS-v13 provides (90:10 as reported by top):
> 
>  5418 guest     20   0  2464  304  236 R   90  0.0   5:41.40 3 hog
>  5419 guest     30  10  2460  304  236 R   10  0.0   0:43.62 3 nice10hog

btw., what are your thoughts about SMP?

it's a natural extension of your current code: I think the best approach 
would be to add a level of 'virtual CPU' objects above struct user. (How 
to set the attributes of those objects is an open question - possibly 
combine it with cpusets?)

That way the scheduler would first pick a "virtual CPU" to schedule, and 
then pick a user from that virtual CPU, and then a task from the user. 
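
In rough pseudo-C (all the type and function names below are made up 
for illustration - this is not actual patch code), the pick would look 
like:

  struct task;                          /* a runnable task */

  struct user_rq {                      /* one user's runqueue, per vcpu */
          struct task *tasks;           /* an rbtree in a real version */
          unsigned int weight;          /* the user's share on this vcpu */
  };

  struct vcpu {
          struct user_rq *users;        /* users runnable on this vcpu */
          unsigned int nr_users;
          unsigned int weight;          /* share of the physical CPU */
  };

  struct vcpu *pick_vcpu(struct vcpu *vcpus, unsigned int nr);
  struct user_rq *pick_user(struct vcpu *v);
  struct task *pick_task(struct user_rq *u);

  struct task *pick_next(struct vcpu *vcpus, unsigned int nr)
  {
          struct vcpu *v = pick_vcpu(vcpus, nr);  /* fairest vcpu first */
          struct user_rq *u = pick_user(v);       /* fairest user on it */

          return pick_task(u);          /* CFS pick within that user */
  }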

To make group accounting scalable, the accounting object attached to the 
user struct must be per-cpu (per-vcpu) too. That way we'd have a clean 
hierarchy like:

  CPU #0 => VCPU A [ 40% ] + VCPU B [ 60% ]
  CPU #1 => VCPU C [ 30% ] + VCPU D [ 70% ]

  VCPU A => USER X [ 10% ] + USER Y [ 90% ]
  VCPU B => USER X [ 10% ] + USER Y [ 90% ]
  VCPU C => USER X [ 10% ] + USER Y [ 90% ]
  VCPU D => USER X [ 10% ] + USER Y [ 90% ]

the scheduler first picks a vcpu, then a user from that vcpu. (The 
actual external structure of the hierarchy should be opaque to the 
scheduler core, naturally, so that we can plug in other hierarchies 
too.)

whenever the scheduler does accounting, it knows where in the hierarchy 
it is and updates all higher-level entries too. This means that the 
accounting object for USER X is replicated for each VCPU that user 
participates in.
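
In the same made-up notation, the accounting step would be something 
like:

  struct user_stat {                    /* USER X's stats on one vcpu */
          unsigned long long exec_runtime;
  };

  struct vcpu_stat {
          unsigned long long exec_runtime;
  };

  struct task_acct {
          unsigned long long exec_runtime;
          struct user_stat *ustat;      /* this user's per-vcpu replica */
          struct vcpu_stat *vstat;      /* the vcpu the task runs on */
  };

  static void account_exec(struct task_acct *t, unsigned long long delta)
  {
          t->exec_runtime        += delta;  /* task level */
          t->ustat->exec_runtime += delta;  /* user level, this vcpu only */
          t->vstat->exec_runtime += delta;  /* vcpu level */
  }

because the user's accounting object is replicated per vcpu, the update 
never touches another CPU's cachelines - which is what makes it scale.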

SMP balancing is straightforward: it would fundamentally iterate through 
the same hierarchy and attempt to keep all levels balanced - I have 
abstracted away its iterators already.
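
i.e. conceptually something like this (again with made-up names):

  struct entity;

  struct level {
          struct entity *entities;      /* siblings at this level */
          unsigned int nr;
  };

  struct entity {
          unsigned long load;           /* weighted load of this entity */
          struct level *children;       /* next level down, or NULL */
  };

  struct entity *find_busiest(struct level *lvl);
  struct entity *find_idlest(struct level *lvl);
  void move_weight(struct entity *from, struct entity *to);

  static void balance_level(struct level *lvl)
  {
          struct entity *busiest = find_busiest(lvl);
          struct entity *idlest  = find_idlest(lvl);
          unsigned int i;

          if (busiest != idlest)
                  move_weight(busiest, idlest); /* equalize this level */

          for (i = 0; i < lvl->nr; i++)         /* then recurse down */
                  if (lvl->entities[i].children)
                          balance_level(lvl->entities[i].children);
  }

the same walk would then balance CPUs, vcpus and users with one piece 
of code.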

Hm? 

	Ingo