Message-ID: <1ca41c0f0907161534j1918bb44m73f8005d47258534@mail.gmail.com>
Date: Thu, 16 Jul 2009 17:34:25 -0500
From: Karthik Singaram Lakshmanan <karthiksingaram@...il.com>
To: Ted Baker <baker@...fsu.edu>
Cc: Peter Zijlstra <peterz@...radead.org>,
Chris Friesen <cfriesen@...tel.com>,
Noah Watkins <jayhawk@....ucsc.edu>,
Raistlin <raistlin@...ux.it>,
Douglas Niehaus <niehaus@...c.ku.edu>,
Henrik Austad <henrik@...tad.us>,
LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Bill Huey <billh@...ppy.monkey.org>,
Linux RT <linux-rt-users@...r.kernel.org>,
Fabio Checconi <fabio@...dalf.sssup.it>,
"James H. Anderson" <anderson@...unc.edu>,
Thomas Gleixner <tglx@...utronix.de>,
Dhaval Giani <dhaval.giani@...il.com>,
KUSP Google Group <kusp@...glegroups.com>,
Tommaso Cucinotta <cucinotta@...up.it>,
Giuseppe Lipari <lipari@...is.sssup.it>,
Raj Rajkumar <raj@....cmu.edu>, dionisio@....cmu.edu
Subject: Re: RFC for a new Scheduling policy/class in the Linux-kernel
> I still conceptually prefer the idea of granting locks to
> contending tasks in priority order, of course. It is just a
> question of whether you want to have to agree (1) that all
> scheduling is based on priority, and (2) pay the price for either
> (2a) dynamically re-ordering all those queues every time a task
> gains or loses priority (due to inheritance, or whatever), or (2b)
> paying the O(n) price of scanning the queue for the currently
> highest-priority task when you grant the lock. If you go this
> way, I would favor the latter. In any system that does not
> already have poor performance due to excessive lock contention,
> the queues should rarely have more than one member. Assuming
> integrity of the queue is maintained by the corresponding lock
> itself, it is much easier to do this scanning at the point the
> lock is released than to support (asynchronous) queue reordering
> for every potential priority change.
>
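For concreteness, the scan-at-release approach described above might
look roughly like the sketch below (plain userspace C so it compiles
standalone; the wait-list layout and the names are invented for
illustration, not taken from any existing rt-mutex code):

#include <stddef.h>
#include <stdio.h>

struct waiter {
        int prio;               /* lower value = higher priority */
        struct waiter *next;    /* singly-linked list of blocked tasks */
};

/*
 * O(n) scan over the lock's wait list, done only at release time, so
 * the list never has to be re-ordered when a waiter's priority changes.
 */
static struct waiter *pick_next_owner(struct waiter *head)
{
        struct waiter *best = head;
        struct waiter *w;

        for (w = head; w; w = w->next)
                if (w->prio < best->prio)
                        best = w;
        return best;
}

int main(void)
{
        struct waiter c = { .prio = 7,  .next = NULL };
        struct waiter b = { .prio = 3,  .next = &c  };
        struct waiter a = { .prio = 12, .next = &b  };

        printf("grant lock to waiter with prio %d\n",
               pick_next_owner(&a)->prio);      /* prints 3 */
        return 0;
}
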
Just chiming in that, from an implementation perspective, we could
keep a priority bitmap of the tasks currently contending for the lock,
much like the bitmap the O(1) scheduler uses for its runqueues.  With
hardware support such as "find_first_bit", locating the
highest-priority task pending on the lock then takes only a few word
tests rather than a walk of the whole wait queue.  Given the small
number of distinct priority levels that most practical systems need,
such an implementation should be quite effective.
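
A rough sketch of that bitmap, again in standalone userspace C (the
type and function names are made up, NPRIO is an arbitrary number of
priority levels, and __builtin_ctzl merely stands in for the kernel's
hardware-assisted find_first_bit/__ffs helpers):

#include <stdio.h>

#define NPRIO           100     /* distinct priority levels, illustrative */
#define BITS_PER_LONG   (8 * sizeof(unsigned long))
#define NWORDS          ((NPRIO + BITS_PER_LONG - 1) / BITS_PER_LONG)

struct lock_waitmap {
        unsigned long bits[NWORDS];     /* bit p set => waiter pending at prio p */
};

static void waitmap_add(struct lock_waitmap *m, unsigned int prio)
{
        m->bits[prio / BITS_PER_LONG] |= 1UL << (prio % BITS_PER_LONG);
}

static void waitmap_del(struct lock_waitmap *m, unsigned int prio)
{
        m->bits[prio / BITS_PER_LONG] &= ~(1UL << (prio % BITS_PER_LONG));
}

/*
 * Highest-priority pending waiter (lowest numeric prio), or -1 if the
 * map is empty.  Cost is a few word tests plus one find-first-bit,
 * independent of how many tasks are actually queued on the lock.
 */
static int waitmap_first(const struct lock_waitmap *m)
{
        size_t i;

        for (i = 0; i < NWORDS; i++)
                if (m->bits[i])
                        return i * BITS_PER_LONG + __builtin_ctzl(m->bits[i]);
        return -1;
}

int main(void)
{
        struct lock_waitmap m = { { 0 } };

        waitmap_add(&m, 12);
        waitmap_add(&m, 3);
        waitmap_add(&m, 57);
        printf("grant to prio %d\n", waitmap_first(&m));        /* 3 */
        waitmap_del(&m, 3);
        printf("then to prio %d\n", waitmap_first(&m));         /* 12 */
        return 0;
}

A real implementation would pair each bit with a per-priority list of
waiters, as the O(1) scheduler does with its runqueue arrays, since
more than one task may be pending at the same priority.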
Thanks,
Karthik
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/