Message-Id: <200703170950.23256.edt@aei.ca>
Date:	Sat, 17 Mar 2007 09:50:22 -0400
From:	Ed Tomlinson <edt@....ca>
To:	Con Kolivas <kernel@...ivas.org>
Cc:	Al Boldi <a1426z@...ab.com>, ck@....kolivas.org,
	Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: RSDL v0.31

On Saturday 17 March 2007 00:57, Con Kolivas wrote:
> On Saturday 17 March 2007 15:40, Al Boldi wrote:
> > Con Kolivas wrote:
> > > On Saturday 17 March 2007 08:55, Al Boldi wrote:
> > > > With X nice'd at -10, and 11 hogs loading the cpu, interactivity looks
> > > > good until the default timeslice/quota is exhausted, and then it slows down.
> > > > Maybe adjusting this according to nice could help.
> > >
> > > Not sure what you mean by that. It's still a fair distribution system,
> > > it's just that nice -10 has quite a bit more cpu allocated than nice 0.
> > > Eventually, if you throw enough load at it, it will slow down too.
> >
> > I mean #DEF_TIMESLICE seems like an initial quota, which gets reset with
> > each major rotation.  Increasing this according to nice may give it more
> > room for an interactivity boost, before being expired.  Alternatively, you
> > may just reset the rotation, if the task didn't use its quota within one
> > full major rotation, effectively skipping the expiry toll.
> 
> DEF_TIMESLICE is a value used for smp balancing and has no effect on quota, so
> I doubt you mean that value. The non-resetting quota you're describing is
> something like the sleep average idea of current systems, where you accumulate
> bonus points by sleeping when you would otherwise be running and redeem them
> later. This is exactly the system I'm avoiding in rsdl, as you'd have to decide
> just how much sleep time a task could accumulate, over how long it would run
> out, and so on; i.e. that's the interactivity estimator. This is the system
> that destroys any guarantee of cpu percentage, ends up leading to periods of
> relative starvation, and is open to tuning that can be either too short or too
> long depending on the values you choose, and so on.
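
(For illustration, the sleep-average scheme being described works roughly like the
sketch below.  All names and constants are hypothetical and chosen only for the
example; this is neither mainline nor rsdl code.)

#define MAX_SLEEP_BONUS_NS	(100 * 1000 * 1000)	/* cap on banked sleep: 100ms, arbitrary */

struct task_hist {
	unsigned long long sleep_bank_ns;	/* sleep time banked while not running */
};

/* On wakeup: bank the time the task just spent sleeping, up to the cap. */
static void account_sleep(struct task_hist *t, unsigned long long slept_ns)
{
	t->sleep_bank_ns += slept_ns;
	if (t->sleep_bank_ns > MAX_SLEEP_BONUS_NS)	/* "how much it can accumulate" */
		t->sleep_bank_ns = MAX_SLEEP_BONUS_NS;
}

/* While running: spend the bank, and map what is left to a priority bonus of 0..10. */
static int sleep_bonus(struct task_hist *t, unsigned long long ran_ns)
{
	if (t->sleep_bank_ns > ran_ns)			/* "how long it runs out over" */
		t->sleep_bank_ns -= ran_ns;
	else
		t->sleep_bank_ns = 0;
	return (int)(t->sleep_bank_ns * 10 / MAX_SLEEP_BONUS_NS);
}

(Both the cap and the decay rate are exactly the tunables objected to above, and a
task that banks a lot of sleep can take cpu away from others, which is where the
percentage guarantee is lost.)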
> 
> > > > It may also be advisable to fix latencies according to nice, and adjust
> > > > timeslices instead.  This may help scalability a lot, as there are
> > > > some timing-sensitive apps that may crash under high load.
> > >
> > > You will find that is the case already with this version. Even under
> > > heavy load if you were to be running one server niced (say httpd nice 19
> > > in the presence of mysql nice 0) the latencies would be drastically
> > > reduced compared to mainline behaviour. I am aware this becomes an issue
> > > for some heavily loaded servers because some servers run multithreaded
> > > while others do not, forcing the admins to nice their multithreaded ones.
> >
> > The thing is, latencies are currently dependent on the number of tasks in
> > the run-queue; i.e. more rq-tasks means higher latencies, yet fixed
> > timeslices according to nice.  Just switching this the other way around, by
> > fixing latencies according to nice, and adjusting the timeslices depending
> > on rq-load, may yield a much more scalable system.
> 
> That is not really feasible to implement. How can you guarantee latencies when 
> the system is overloaded? If you have 1000 tasks all trying to get scheduled 
> in, say, 10ms you end up running for only 10 microseconds at a time. That will 
> achieve the exact opposite, whereby as the load increases the runtime gets 
> shorter and shorter until cpu cache thrashing dominates and no real work occurs.
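
(In other words, with a fixed latency target the per-task slice is just the target
divided by the run-queue length, and it shrinks as the load grows.  A minimal sketch,
with the names and the 10ms target purely illustrative:)

/* Fixed latency target: every runnable task should get the cpu within this window. */
#define LATENCY_TARGET_NS	(10 * 1000 * 1000)	/* 10ms, illustrative */

static unsigned long long slice_ns(unsigned int nr_running)
{
	if (!nr_running)
		nr_running = 1;
	return LATENCY_TARGET_NS / nr_running;
}

/*
 * nr_running = 10   ->  1ms slices
 * nr_running = 1000 -> 10us slices, far below the cost of a context switch
 * plus refilling the cache, which is the objection above.
 */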

Con

I think Al may have a point.  Yes, there are cases where it would not work well.  How about
reversing the idea?  Set the initial timeslice so it gives good latency at a fairly high
load, and increase the timeslice when the load is lower than the load we attempt to
guarantee good latency for.  The idea is to reduce the number of context switches while
retaining a fixed latency.
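
One way to read that, as a rough sketch (all names and numbers below are illustrative
only, not code from rsdl or mainline):

#define LATENCY_TARGET_NS	(10 * 1000 * 1000)	/* latency we aim to hold, illustrative */
#define DESIGN_LOAD		32			/* the "fairly high load" we design for */
#define MIN_SLICE_NS		(LATENCY_TARGET_NS / DESIGN_LOAD)

static unsigned long long slice_ns(unsigned int nr_running)
{
	if (nr_running >= DESIGN_LOAD)
		return MIN_SLICE_NS;		/* never shrink below the design-point slice */
	if (!nr_running)
		nr_running = 1;
	/* fewer runnable tasks than the design load: stretch the slice, keep latency */
	return LATENCY_TARGET_NS / nr_running;
}

At or above the design load the slice stays fixed, so the context switch rate stops
growing with load; below it the slice stretches, so switches become rarer still while
the latency target is kept.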

Ed Tomlinson
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
