Message-ID: <20090920203048.7b1b5aa2@infradead.org>
Date:	Sun, 20 Sep 2009 20:30:48 +0200
From:	Arjan van de Ven <arjan@...radead.org>
To:	Robert Hancock <hancockrwd@...il.com>
Cc:	Junhee Lee <junhee@...sys.kaist.ac.kr>,
	linux-kernel@...r.kernel.org
Subject: Re: microsecond event scheduling in an application
On Sun, 20 Sep 2009 12:26:09 -0600
Robert Hancock <hancockrwd@...il.com> wrote:
> > On 09/08/2009 08:27 AM, Junhee Lee wrote:
> > > I am working on an event scheduler that handles events at
> > > microsecond granularity. The program is a network emulator driven
> > > by simulation code, and I'd like the emulator to reproduce the
> > > timing behaviour of the simulation. That requires high resolution
> > > timer interrupts, but getting them by raising the tick frequency
> > > (the jiffies clock) is bound to hurt system performance.
> > > Are there any comments or suggestions on how to support microsecond
> > > event scheduling without that performance degradation?
> 
> Just increasing HZ will degrade performance, yes, but we have
> hrtimers now, which can provide granularities smaller than one jiffy,
> so raising HZ shouldn't be needed.
select/poll use hrtimers, which are jiffies-independent...
you'll be hard pressed to notice jiffies granularity in userspace
nowadays.
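
A rough, untested sketch of what that looks like from userspace: ask
select() for a 100 microsecond timeout and measure the actual sleep with
clock_gettime(CLOCK_MONOTONIC). On a kernel with hrtimers the result
should come back in the microsecond range, independent of HZ; the exact
accuracy depends on the clocksource and hardware.

#include <stdio.h>
#include <sys/select.h>
#include <time.h>

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
	/* 100 microsecond timeout, no file descriptors: a pure sleep */
	struct timeval tv = { .tv_sec = 0, .tv_usec = 100 };
	long long t0 = now_ns();

	select(0, NULL, NULL, NULL, &tv);

	printf("asked for 100us, slept for %lld ns\n", now_ns() - t0);
	return 0;
}

With a jiffies-based timeout and, say, HZ=250, the same request would
round up to something on the order of 4ms instead.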
-- 
Arjan van de Ven 	Intel Open Source Technology Centre
For development, discussion and tips for power savings, 
visit http://www.lesswatts.org