Message-id: <200703121559.23157.jos@mijnkamer.nl>
Date:	Mon, 12 Mar 2007 15:58:44 +0100
From:	jos poortvliet <jos@...nkamer.nl>
To:	ck@....kolivas.org
Cc:	Al Boldi <a1426z@...ab.com>, Con Kolivas <kernel@...ivas.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free
 interactive cpu scheduler

On Monday 12 March 2007, Al Boldi wrote:
> Con Kolivas wrote:
> > > > The higher priority one always gets 6-7ms whereas the lower priority
> > > > one runs 6-7ms and then one larger, perfectly bounded expiration amount.
> > > > Basically exactly as I'd expect. The higher priority task gets
> > > > precisely RR_INTERVAL maximum latency whereas the lower priority task
> > > > gets RR_INTERVAL min and full expiration (according to the virtual
> > > > deadline) as a maximum. That's exactly how I intend it to work. Yes I
> > > > realise that the max latency ends up being longer intermittently on
> > > > the niced task but that's -in my opinion- perfectly fine as a
> > > > compromise to ensure the nice 0 one always gets low latency.
> > >
> > > I think it should be possible to spread this max expiration latency
> > > across the rotation, should it not?
> >
> > There is a way I toyed with of creating maps of slots to use for each
> > different priority, but it broke the O(1) nature of the virtual deadline
> > management. Minimising algorithmic complexity seemed more important to
> > maintain than getting slightly better latency spreads for niced tasks. It
> > also appeared to be less cache friendly in design. I could certainly try
> > to implement it, but how much importance should we place on the latency
> > of niced tasks? Are you aware of any usage scenario where latency
> > sensitive tasks are ever significantly niced in the real world?
>
> It only takes one negatively niced proc to affect X adversely.

Then maybe we should start nicing X again, as we did (and had to) until a few 
years ago? Or should we just wait until X gets fixed (after all, development 
is moving faster than ever)? Or is this really the scheduler's fault?
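
For what it's worth, here is a rough userspace sketch of the slot-map idea 
Con describes above. It is purely illustrative -- not code from the RSDL 
patch, and the names and the ROTATION_SLOTS value are my own invention. The 
point is that a niced task's quota gets interleaved across the rotation, so 
its worst-case wait is a few slots rather than RR_INTERVAL plus one full 
expiration burst:

	/*
	 * Illustrative only: spread a niced task's slot quota evenly
	 * across the rotation instead of serving it in one burst.
	 * ROTATION_SLOTS and all names here are hypothetical.
	 */
	#include <stdio.h>

	#define ROTATION_SLOTS 40	/* hypothetical slots per rotation */

	/* Return a bitmap with 'quota' slots spread over the rotation. */
	static unsigned long long build_slot_map(int quota)
	{
		unsigned long long map = 0;

		for (int i = 0; i < quota; i++)
			map |= 1ULL << (i * ROTATION_SLOTS / quota);
		return map;
	}

	int main(void)
	{
		/* A heavily niced task holding 6 of the 40 slots. */
		unsigned long long map = build_slot_map(6);

		for (int s = 0; s < ROTATION_SLOTS; s++)
			putchar(map & (1ULL << s) ? 'X' : '.');
		/* prints: X.....X......X......X.....X......X...... */
		putchar('\n');
		return 0;
	}

Of course, as Con says, keeping such a map per priority is exactly what 
breaks the O(1) bookkeeping of the virtual deadlines, so it may not be worth 
it just for niced tasks.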

> Thanks!
>
> --
> Al



-- 
Disclaimer:

Everything I do, think, and say is based on the world view I currently hold. 
I am not responsible for changes to the world, or to my view of it, nor for 
my own behaviour that follows from them. Everything I say is meant kindly, 
unless explicitly stated otherwise.
