Message-ID: <alpine.LFD.2.00.0908280238290.2888@localhost.localdomain>
Date: Fri, 28 Aug 2009 02:44:21 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Chris Friesen <cfriesen@...tel.com>
cc: Christoph Lameter <cl@...ux-foundation.org>,
raz ben yehuda <raziebe@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>, mingo@...e.hu,
peterz@...radead.org, maximlevitsky@...il.com, efault@....de,
riel@...hat.com, wiseman@...s.biu.ac.il,
linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org
Subject: Re: RFC: THE OFFLINE SCHEDULER
On Thu, 27 Aug 2009, Chris Friesen wrote:
> On 08/27/2009 03:09 PM, Thomas Gleixner wrote:
>
> > That's just the wrong approach. All you need is a way to tell the
> > kernel that CPUx can switch off the scheduler tick when only one
> > thread is running and that very thread is running in user space. Once
> > another thread arrives on that CPU, or the single thread enters the
> > kernel for a blocking syscall, the scheduler tick has to be
> > restarted.
>
> That's an elegant approach...I like it.
>
> How would you deal with per-cpu kernel threads (softirqs, etc.) or
> softirq processing while in the kernel?
If you have pinned an interrupt to that CPU, then you need to process
the softirq for it as well. If that's the device your single user
space thread is talking to, then you want exactly that; if you are not
interested in it, simply pin that device irq to some other CPU: no irq
-> no softirq.
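
Pinning the irq away needs no kernel change at all; it can be done
from user space through /proc/irq/<N>/smp_affinity. A minimal sketch
(the irq number 42 and the mask value are made up for illustration,
the procfs interface itself is real):

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		/* hypothetical irq number, pick the real one from
		 * /proc/interrupts */
		FILE *f = fopen("/proc/irq/42/smp_affinity", "w");

		if (!f) {
			perror("fopen");
			return EXIT_FAILURE;
		}
		/* hex cpumask 0x7 = CPUs 0-2: the irq (and its softirq
		 * work) can no longer land on CPU 3 */
		fprintf(f, "7\n");
		fclose(f);
		return EXIT_SUCCESS;
	}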
> Switching off the timer tick isn't sufficient because the scheduler
> will be triggered on the way back to userspace in a syscall.
If there is just one user space thread, why is the NOOP call to the
scheduler interesting? If you go into the kernel you have some
overhead anyway, so why would the few instructions to call schedule()
and return with the same task (as it is the only runnable one) matter?
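
Just to make the condition above concrete, a rough sketch of the
check such a mode would do on the tick path. None of these helper
names are real kernel interfaces; they are stand-ins for the idea:
stop the tick only while a single runnable task executes in user
space, and restart it as soon as that changes.

	/* Illustrative sketch only -- made-up interfaces */
	struct rq;				/* per-cpu runqueue, opaque here */
	extern int  rq_cpu(struct rq *rq);
	extern int  rq_nr_running(struct rq *rq);
	extern int  curr_task_in_user_mode(struct rq *rq);
	extern int  cpu_tick_off_allowed(int cpu);	/* the new per-cpu knob */
	extern void stop_sched_tick(int cpu);
	extern void restart_sched_tick(int cpu);

	static void check_sched_tick(struct rq *rq)
	{
		int cpu = rq_cpu(rq);

		if (cpu_tick_off_allowed(cpu) &&	/* admin enabled it */
		    rq_nr_running(rq) == 1 &&		/* single runnable task */
		    curr_task_in_user_mode(rq))		/* running in user space */
			stop_sched_tick(cpu);
		else
			restart_sched_tick(cpu);	/* another task arrived or
							   we entered the kernel */
	}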
Thanks,
tglx