Message-ID: <3efb10970803041222j61fd8d28k5cef0994b9bd6f12@mail.gmail.com>
Date:	Tue, 4 Mar 2008 21:22:24 +0100
From:	"Remy Bohmer" <linux@...mer.net>
To:	"David Brownell" <david-b@...bell.net>,
	"Bosko Radivojevic" <bosko.radivojevic@...il.com>
Cc:	lkml <linux-kernel@...r.kernel.org>,
	linux-rt-users@...r.kernel.org, tglx@...utronix.de
Subject: Re: High resolution timers on AT91SAM926x

Hello Bosko/Dave,

> On Monday 03 March 2008, Bosko Radivojevic wrote:
>
> > I had CONFIG_ATMEL_TCLIB enabled, but not TCB_CLKSRC and
> > TCB_CLKSRC_BLOCK=0. With all those options, I finally have HRT
> > functionality. But the strange thing is that the jitter of my
> > little example (get time, sleep 1 ms, get time, show the
> > difference) is around 250us. Maybe this is normal for this
> > architecture?
> I have no idea why that would be.  Maybe you can find out.  :)

These are normal figures for this core on preempt-rt.
You are talking about jitter on timers: since the worst-case latency
for scheduling an RT thread on preempt-rt is about 300us, a jitter of
250us is quite normal, and actually quite good... (the mainline
kernel's average latency is better, but its worst case is unbounded).
(Note: the 300us seems to be caused by something in the networking
layer; without networking I saw worst-case latencies of about 150us,
but NO guarantee here.)
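
For reference, a minimal sketch of such a test (my own illustration,
not Bosko's actual program; it assumes clock_gettime()/clock_nanosleep()
on CLOCK_MONOTONIC and linking with -lrt):

/* Rough sketch of the "get time, sleep 1 ms, get time" jitter test. */
#include <stdio.h>
#include <time.h>

static long long ts_ns(const struct timespec *ts)
{
	return (long long)ts->tv_sec * 1000000000LL + ts->tv_nsec;
}

int main(void)
{
	const struct timespec delay = { 0, 1000000 };	/* 1 ms */
	struct timespec before, after;
	int i;

	for (i = 0; i < 1000; i++) {
		clock_gettime(CLOCK_MONOTONIC, &before);
		clock_nanosleep(CLOCK_MONOTONIC, 0, &delay, NULL);
		clock_gettime(CLOCK_MONOTONIC, &after);
		/* how much later than the requested 1 ms did we wake up? */
		printf("%lld us late\n",
		       (ts_ns(&after) - ts_ns(&before) - 1000000LL) / 1000);
	}
	return 0;
}

Run it as an RT task (e.g. chrt -f 80 ./jitter) so you measure
scheduling latency rather than time spent waiting behind other tasks.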

But besides the worst-case scheduling latency, the timer interrupt
handler is shared with the handler of the DBGU. There are patches
available to split the interrupt handler; that makes this behavior
somewhat better, but do not expect miracles here.
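
For illustration, this is roughly what that sharing looks like at the
driver level (a sketch of the generic shared-IRQ pattern, not the
actual at91 code; the names and the irq number are placeholders):
both the timer and the DBGU register on the same interrupt line with
IRQF_SHARED, so every timer tick also walks the DBGU's handler.

/* Sketch only: two devices on one interrupt line.  Each handler must
 * check its own status register and return IRQ_NONE when the
 * interrupt was not for it. */
#include <linux/interrupt.h>

static int tc_cookie, dbgu_cookie;	/* dummy dev_id cookies */

static irqreturn_t tc_clkevt_irq(int irq, void *dev_id)
{
	/* read/ack the TC status register here */
	return IRQ_HANDLED;
}

static irqreturn_t dbgu_irq(int irq, void *dev_id)
{
	/* read the DBGU status register here */
	return IRQ_HANDLED;
}

static int register_shared_handlers(unsigned int sys_irq)
{
	int ret;

	/* same irq number for both, so the core runs both handlers
	 * on every interrupt on that line */
	ret = request_irq(sys_irq, tc_clkevt_irq, IRQF_SHARED,
			  "tc_clkevt", &tc_cookie);
	if (ret)
		return ret;
	return request_irq(sys_irq, dbgu_irq, IRQF_SHARED,
			   "dbgu", &dbgu_cookie);
}

Splitting the handler presumably just avoids running the other
device's status check on every timer interrupt, which matches the
"somewhat better, no miracles" expectation above.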

I stopped using HRT on these cores with preempt-rt quite some time
ago (several months), because the finer timer granularity allows
applications to have timers expire at sub-millisecond intervals, and
each expiry costs an interrupt-handling cycle of roughly 50 to more
than 100us, especially when different applications use periodic
timers that are out of sync.
This becomes visible as extreme CPU-load figures for applications
with 1 ms timers: 13-15% CPU load for an application with just a
single 1 ms timer is quite normal...
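
To put a number like that in context, here is about the smallest
program that shows it (a sketch of my own, assuming POSIX timers and
linking with -lrt, not a tuned benchmark): one process, one 1 ms
interval timer; run it on the board and watch its CPU usage in top.

/* One plain 1 ms POSIX interval timer -- enough to show noticeable
 * CPU load on a core like this.  Sketch for illustration only. */
#include <signal.h>
#include <time.h>

int main(void)
{
	struct sigevent sev = { .sigev_notify = SIGEV_SIGNAL,
				.sigev_signo  = SIGRTMIN };
	struct itimerspec its = {
		.it_value    = { 0, 1000000 },	/* first expiry after 1 ms */
		.it_interval = { 0, 1000000 },	/* then every 1 ms */
	};
	sigset_t set;
	timer_t timer;
	int sig;

	sigemptyset(&set);
	sigaddset(&set, SIGRTMIN);
	sigprocmask(SIG_BLOCK, &set, NULL);	/* collect the signal via sigwait() */

	timer_create(CLOCK_MONOTONIC, &sev, &timer);
	timer_settime(timer, 0, &its, NULL);

	for (;;)
		sigwait(&set, &sig);		/* one wakeup per millisecond */
}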

FWIW: to get the most accurate realtime response for periodic timers,
I used a TC block to generate a periodic interrupt that only needs to
wake up an RT thread. That way the OS timer framework can still be
used for non-RT stuff and/or slow timers. This is much less heavy on
this core and gives much better, more deterministic RT results.
(This is not what I would prefer to do, but these cores are just not
as fast as multi-gigahertz x86 PCs ;-))
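
For anyone who wants to try that scheme, a very rough kernel-side
sketch of the idea (my illustration, not Remy's actual code; the TC
channel programming and the module init/exit boilerplate are left
out, and TC_IRQ stands for whatever interrupt the chosen TC channel
uses): the interrupt handler does nothing but complete a completion,
and a SCHED_FIFO kernel thread does the periodic work.

/* Sketch: the periodic TC interrupt only wakes an RT kernel thread. */
#include <linux/interrupt.h>
#include <linux/kthread.h>
#include <linux/completion.h>
#include <linux/sched.h>

static DECLARE_COMPLETION(tc_tick);

static irqreturn_t tc_tick_irq(int irq, void *dev_id)
{
	/* ack the TC status register here, then just signal the thread */
	complete(&tc_tick);
	return IRQ_HANDLED;
}

static int tc_tick_thread(void *data)
{
	struct sched_param param = { .sched_priority = 80 };

	sched_setscheduler(current, SCHED_FIFO, &param);

	while (!kthread_should_stop()) {
		wait_for_completion(&tc_tick);
		/* periodic RT work goes here */
	}
	return 0;
}

/* elsewhere, after programming the TC channel for a periodic RC compare:
 *	kthread_run(tc_tick_thread, NULL, "tc_tick");
 *	request_irq(TC_IRQ, tc_tick_irq, IRQF_SHARED, "tc_tick", &tc_tick);
 */

A semaphore or wait queue would do just as well; the point is only
that the interrupt path stays trivially short.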


Kind Regards,

Remy

>
> > System is (as noted on rt.wik site) very slow with NO_HZ option enabled.
>
> It doesn't exactly say *why* or compare to other arm926ejs
> chips ...
>
> Could the 64-bit math (ktime stuff) be a factor there?  Your
> board probably only runs at 95 (or so) bogomips after all.
>
> And is that just a NO_HZ issue, or generically an issue when
> oneshot timer modes are in heavy use?
>
> - Dave
