Date:	Thu, 25 Jan 2007 14:19:03 -0800
From:	john stultz <johnstul@...ibm.com>
To:	John <shill@...e.fr>
Cc:	linux-kernel@...r.kernel.org, tglx@...esys.com, mingo@...e.hu,
	akpm@...l.org
Subject: Re: One-shot high-resolution POSIX timer periodically late

On Thu, 2007-01-25 at 10:59 +0100, John wrote:
> John Stultz wrote:
> > On Wed, 2007-01-24 at 10:41 +0100, John wrote:
> >> As you can see, the first diagnostic came at 472.410501014... Then
> >> another diagnostic almost exactly two seconds apart 9 times in a row!
> >>
> >> My process is the only SCHED_FIFO process on the system. There are no
> >> user-space processes with a higher priority. AFAICT, only a kernel
> >> thread could keep the CPU away from my app.
> >>
> >> Is there a periodic kernel thread that runs every 2 seconds, cannot be
> >> preempted, and runs for over 50 µs??
> > 
> > This sounds like a BIOS SMI issue. Can you reproduce this behavior on
> > different hardware?
[snip]
> Do PCs typically enter System Management Mode periodically?

Completely depends on the system hardware and BIOS.

> Is it possible to disable SMM?

Not usually. On some systems, certain SMIs can be disabled via BIOS options.

> According to the Wikipedia article, even the kernel is unaware that
> the CPU has entered SMM, is that correct?

Yes. The kernel has no notification that the SMI occurred.
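About the only way to see them from user space is indirectly, e.g. by
spinning on CLOCK_MONOTONIC and flagging any gap above a threshold.
Something like this rough sketch (not your actual diagnostic, and the
50 us threshold is only an example) will usually show SMI stalls as
unexplained gaps when run SCHED_FIFO with nothing else at that priority:

/* Busy-loop on CLOCK_MONOTONIC and report gaps above a threshold.
 * Build with: gcc -O2 -o gapcheck gapcheck.c -lrt
 */
#include <stdio.h>
#include <time.h>

#define THRESHOLD_NS 50000LL	/* 50 us, example value */

static long long ts_diff_ns(const struct timespec *a, const struct timespec *b)
{
	return (b->tv_sec - a->tv_sec) * 1000000000LL +
	       (b->tv_nsec - a->tv_nsec);
}

int main(void)
{
	struct timespec prev, now;

	clock_gettime(CLOCK_MONOTONIC, &prev);
	for (;;) {
		long long delta;

		clock_gettime(CLOCK_MONOTONIC, &now);
		delta = ts_diff_ns(&prev, &now);
		if (delta > THRESHOLD_NS)
			printf("gap of %lld ns at %ld.%09ld\n",
			       delta, (long)now.tv_sec, now.tv_nsec);
		prev = now;
	}
	return 0;
}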

> I've run my tests on a Dell PC. IIRC, the BIOS options are very basic.
> 
> I'll also try another PC as you suggest.

Also, do check the -rt tree as Ingo suggested. I misread your earlier
email and thought you were already running it.


> +++++
> 
> On a related note, John, AFAIU, you wrote the GTOD infrastructure. In my 
> app, I need to call clock_gettime(CLOCK_MONOTONIC, ...) very often, i.e. 
> around 2000 times every second (when I receive a packet, and when I 
> re-send a packet). Is there a way to improve the overhead / latency of 
> these calls? I've heard about vsyscalls, are they relevant?
> 
> Do I need a specific glibc to use vsyscalls?

vsyscalls are only supported on x86_64/powerpc (I think) right now.
glibc support is necessary, and right now it's only for gettimeofday(),
not clock_gettime().

> If I call clock_gettime(CLOCK_MONOTONIC, &spec) twice in a row, then 
> subtract the two timespecs, I get ~1400 ns on a 2.8 GHz P4. AFAIU, my 
> clock source is acpi_pm. I tried setting it to tsc but it made hell 
> break loose.
> 
> http://groups.google.com/group/fa.linux.kernel/msg/a095241d49adfc44?dmode=source
> 
> Apparently, 1400 ns is similar to what you observe with acpi_pm.
> I suppose I'll need to use the TSC if I want any improvement?

Yeah, unfortunately the ACPI PM timer is much slower than the TSC.
However, the TSC is often unstable, so you might not be able to use
it.
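If it helps, here is a rough sketch of both measurements: the back-to-back
delta you describe, and an averaged comparison against gettimeofday(), which
can use the vsyscall fast path where clock_gettime() currently cannot. The
iteration count is arbitrary:

/* Per-call cost of the time interfaces discussed above.
 * Build with: gcc -O2 -o clockcost clockcost.c -lrt
 */
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

#define ITERS 100000

static long long ts_diff_ns(const struct timespec *a, const struct timespec *b)
{
	return (b->tv_sec - a->tv_sec) * 1000000000LL +
	       (b->tv_nsec - a->tv_nsec);
}

int main(void)
{
	struct timespec a, b, ts;
	struct timeval tv;
	int i;

	/* Single shot: two calls in a row, subtract. */
	clock_gettime(CLOCK_MONOTONIC, &a);
	clock_gettime(CLOCK_MONOTONIC, &b);
	printf("back-to-back clock_gettime: %lld ns\n", ts_diff_ns(&a, &b));

	/* Averaged over many calls. */
	clock_gettime(CLOCK_MONOTONIC, &a);
	for (i = 0; i < ITERS; i++)
		clock_gettime(CLOCK_MONOTONIC, &ts);
	clock_gettime(CLOCK_MONOTONIC, &b);
	printf("clock_gettime: %lld ns/call\n", ts_diff_ns(&a, &b) / ITERS);

	clock_gettime(CLOCK_MONOTONIC, &a);
	for (i = 0; i < ITERS; i++)
		gettimeofday(&tv, NULL);
	clock_gettime(CLOCK_MONOTONIC, &b);
	printf("gettimeofday:  %lld ns/call\n", ts_diff_ns(&a, &b) / ITERS);

	return 0;
}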

You can play around with the "idle=poll" boot option to see if that allows
you to use the TSC, but it will increase your power usage and carries
possible thermal risks.
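Whatever you end up trying, you can confirm which clocksource the kernel is
actually using via the generic timekeeping sysfs interface (the paths below
are what recent kernels export); a small sketch:

/* Print the current and available clocksources. */
#include <stdio.h>

static void dump(const char *path)
{
	char buf[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", path, buf);
	fclose(f);
}

int main(void)
{
	dump("/sys/devices/system/clocksource/clocksource0/current_clocksource");
	dump("/sys/devices/system/clocksource/clocksource0/available_clocksource");
	return 0;
}

I believe current_clocksource can also be written (as root) on recent
kernels to switch clocksources at run time, which may be less disruptive
than rebooting while you experiment.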

thanks
-john

