Message-ID: <015f01ca3090$85e8a280$91b9e780$@kaist.ac.kr>
Date: Tue, 8 Sep 2009 23:27:40 +0900
From: "Junhee Lee" <junhee@...sys.kaist.ac.kr>
To: <linux-kernel@...r.kernel.org>
Subject: microsecond event scheduling in an application
I am working on an event scheduler that handles events at microsecond
resolution. This program is actually a network emulator driven by
simulation code, and I would like the emulator to reproduce the
simulation's timing behavior. That requires high-resolution timer
interrupts, but obtaining them by raising the tick frequency (the jiffies
clock) would hurt overall system performance.
Are there any comments or suggestions on how to support microsecond event
scheduling without that performance degradation?
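
For context, here is a minimal sketch of the kind of userspace wait I have
in mind, assuming a kernel built with CONFIG_HIGH_RES_TIMERS so that
clock_nanosleep() is backed by hrtimers rather than the jiffies tick (the
50-microsecond deadline is just an illustrative value):

/* Wait until an absolute deadline a few microseconds away using
 * clock_nanosleep(), which uses hrtimers on CONFIG_HIGH_RES_TIMERS
 * kernels instead of the periodic tick. */
#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec deadline;

	clock_gettime(CLOCK_MONOTONIC, &deadline);

	/* Schedule the next "event" 50 microseconds from now. */
	deadline.tv_nsec += 50 * 1000;
	if (deadline.tv_nsec >= 1000000000L) {
		deadline.tv_nsec -= 1000000000L;
		deadline.tv_sec += 1;
	}

	/* TIMER_ABSTIME avoids drift when scheduling periodic events. */
	clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);

	printf("event fired\n");
	return 0;
}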
Regards