Date:	Mon, 12 Mar 2007 15:53:30 +1100
From:	Con Kolivas <kernel@...ivas.org>
To:	Al Boldi <a1426z@...ab.com>
Cc:	ck list <ck@....kolivas.org>, linux-kernel@...r.kernel.org
Subject: Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

On Monday 12 March 2007 15:42, Al Boldi wrote:
> Con Kolivas wrote:
> > On Monday 12 March 2007 08:52, Con Kolivas wrote:
> > > And thank you! I think I know what's going on now. I think each
> > > rotation is followed by another rotation before the higher-priority
> > > task gets a look-in in schedule() to even get quota and add it to
> > > the runqueue quota. I'll try a simple change to see if that helps.
> > > Patch coming up shortly.
> >
> > Can you try the following patch and see if it helps. There's also one
> > minor preemption logic fix in there that I'm planning on including.
> > Thanks!
>
> Applied on top of v0.28 mainline, and there is no difference.
>
> What's it look like on your machine?

The higher-priority one always gets 6-7ms, whereas the lower-priority one runs 
6-7ms and then one larger, perfectly bounded expiration amount. Basically 
exactly as I'd expect. The higher-priority task gets precisely RR_INTERVAL 
maximum latency, whereas the lower-priority task gets RR_INTERVAL as a minimum 
and full expiration (according to the virtual deadline) as a maximum. That's 
exactly how I intend it to work. Yes, I realise that the max latency ends up 
being intermittently longer on the niced task, but that is, in my opinion, 
perfectly fine as a compromise to ensure the nice 0 one always gets low latency.

E.g.:
nice 0 vs nice 10

nice 0:
pid 6288, prio   0, out for    7 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms

nice 10:
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for   66 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms

Exactly as I'd expect. If you want fixed latencies _of niced tasks_ in the 
presence of less-niced tasks, you will not get them with this scheduler. What 
you will get, though, is a perfectly bounded relationship: you know exactly 
what the maximum latency will ever be.
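
To make that bound concrete, here's a back-of-the-envelope model in C. This 
is purely illustrative: RR_INTERVAL_MS = 6 and the 60ms quota term are just 
numbers read off the output above, and max_latency_ms() is a made-up helper, 
not anything from the actual scheduler code.

/* Illustrative model only; not taken from sched.c. */
#include <stdio.h>

#define RR_INTERVAL_MS 6	/* assumed interval, matching the ~6ms runs above */

/*
 * Worst-case time (ms) before a task runs again: RR_INTERVAL for its
 * own slice boundary, plus however much quota the less-niced tasks
 * may burn through before the virtual deadline forces an expiration.
 * For the nice 0 task that extra term is 0; for the nice 10 task it
 * is the nice 0 task's full expiration window.
 */
static int max_latency_ms(int higher_prio_quota_ms)
{
	return RR_INTERVAL_MS + higher_prio_quota_ms;
}

int main(void)
{
	printf("nice 0 bound:  %d ms\n", max_latency_ms(0));	/* 6 ms */
	printf("nice 10 bound: %d ms\n", max_latency_ms(60));	/* 66 ms */
	return 0;
}

On that reading, the single 66ms sample in the nice 10 output would be the 
second case firing once per rotation, with every other sample sitting at the 
RR_INTERVAL floor.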

Thanks for the test case. It's interesting, and it's nice that it confirms 
this scheduler works as I expect it to.
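
For anyone who wants to reproduce the numbers, here's a minimal sketch of 
the kind of test that produces output like the above: a CPU-bound loop that 
reports how long it was scheduled out between iterations. This is my own 
reconstruction, not Al's actual test case.

/* Assumed reconstruction of the test; not the original program. */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

static long long now_us(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return tv.tv_sec * 1000000LL + tv.tv_usec;
}

int main(void)
{
	int pid = (int)getpid();
	int prio = getpriority(PRIO_PROCESS, 0);
	long long last = now_us();

	for (;;) {
		long long t = now_us();
		long long gap_ms = (t - last) / 1000;

		/* Any gap much longer than one loop iteration means we
		 * were scheduled out for that long. */
		if (gap_ms >= 2) {
			printf("pid %d, prio %2d, out for %4lld ms\n",
			       pid, prio, gap_ms);
			fflush(stdout);
		}
		last = t;
	}
}

Run two copies pinned to the same CPU, one per nice level, e.g. 
"./gaptest & nice -n 10 ./gaptest &", and compare the reported gaps.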

-- 
-ck
