Message-ID: <4AB673C1.7020909@gmail.com>
Date: Sun, 20 Sep 2009 12:26:09 -0600
From: Robert Hancock <hancockrwd@...il.com>
To: Junhee Lee <junhee@...sys.kaist.ac.kr>
CC: linux-kernel@...r.kernel.org
Subject: Re: microsecond event scheduling in an application
On 09/08/2009 08:27 AM, Junhee Lee wrote:
> I am working on an event scheduler that handles events at microsecond
> resolution. This program is a network emulator built on simulation code,
> and I would like the emulator to reproduce the simulation's behavior,
> so high-resolution timer interrupts are required.
> But a high-resolution timer interrupt driven by a high tick frequency
> (the jiffies clock) is bound to affect system performance.
> Are there any comments or suggestions on how to support microsecond event
> scheduling without performance degradation?
Just increasing HZ will degrade performance, yes, but we now have hrtimers,
which can provide granularities finer than one jiffy, so raising the tick
rate shouldn't be needed.