Message-ID: <46235447.3080000@free.fr>
Date:	Mon, 16 Apr 2007 12:47:35 +0200
From:	John <linux.kernel@...e.fr>
To:	linux-kernel@...r.kernel.org
CC:	linux.kernel@...e.fr
Subject: Disabling x86 System Management Mode

Hello everyone,

According to Wikipedia:
http://en.wikipedia.org/wiki/Non-Maskable_Interrupt
http://en.wikipedia.org/wiki/System_Management_Mode

"SMM is an operating mode of the Intel 386SL and later microprocessor in 
which all normal execution (including the operating system) is 
suspended, and special separate software (usually firmware or a 
hardware-assisted debugger) is executed in high-privilege mode.

Operations in SMM take CPU time away from the OS, since the CPU state 
must be stored to memory (SMRAM) and any write back caches must be 
flushed. This can destroy real-time behavior and cause clock ticks to 
get lost."

AFAIU, even a hard real-time OS is "defenseless" against SMIs that
kick the CPU into SMM.

I'm planning on writing a few lines of code to gather indirect evidence
that the CPU is periodically entering SMM.

I was considering a kernel module along the lines of...

for (i = 0; i < N; ++i)
{
   unsigned int cycles;

   set_current_state(TASK_UNINTERRUPTIBLE);
   schedule_timeout(1);   /* sleep at least one jiffy */

   local_irq_disable();
   cycles = foo();        /* timed NOP loop, see below */
   /* update stats with cycles */
   local_irq_enable();
}

foo is a loop full of NOPs.
It occupies only a few cache lines in the L1 instruction cache.
The conditional jump is trivial to predict.
There is a serializing instruction (CPUID) before and after the loop.
As a result, foo's latency should be very consistent.
foo returns the number of cycles it ran. On the systems I have to work
with, foo typically runs for ~800 microseconds.
I suppose disabling interrupts for that long is bound to break
something somewhere?

.globl foo
foo:
   push %ebx          /* cpuid clobbers %ebx, which is callee-saved */
   push %esi
   cpuid              /* serialize before the first TSC read */
   rdtsc
   mov MM, %ecx       /* NOP-loop iteration count (global MM) */
   mov %eax, %esi     /* save low 32 bits of the start TSC */
.align 16
.L1:
   nop
   nop
   ... (lots of nops)
   dec %ecx
   jnz .L1
.L2:
   cpuid              /* serialize again before the second TSC read */
   rdtsc
   sub %esi, %eax     /* elapsed cycles (low 32 bits are enough here) */
   pop %esi
   pop %ebx
   ret
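
For concreteness, here is a rough sketch of the module wrapper I have
in mind (untested; N, MM, the threshold-free min/max bookkeeping and
all the names are placeholders, and I'm assuming the usual 2.6-era
module boilerplate):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/irqflags.h>

#define N 1000                  /* number of samples (placeholder) */

unsigned int MM = 100000;       /* NOP-loop count read by foo (placeholder) */

extern unsigned int foo(void);  /* the assembly routine above */

static int __init smm_probe_init(void)
{
	unsigned int min = ~0U, max = 0;
	int i;

	for (i = 0; i < N; ++i) {
		unsigned int cycles;

		/* must set the task state, or schedule_timeout() returns
		 * immediately */
		set_current_state(TASK_UNINTERRUPTIBLE);
		schedule_timeout(1);

		local_irq_disable();
		cycles = foo();
		local_irq_enable();

		if (cycles < min)
			min = cycles;
		if (cycles > max)
			max = cycles;
	}

	printk(KERN_INFO "smm_probe: min=%u max=%u cycles\n", min, max);
	return 0;
}

static void __exit smm_probe_exit(void)
{
}

module_init(smm_probe_init);
module_exit(smm_probe_exit);
MODULE_LICENSE("GPL");

The idea is that I can just insmod it, read min/max from dmesg, and
eyeball the spread.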

If foo returns something out of the ordinary, even though interrupts 
were disabled, then it must have been interrupted by a non-maskable 
interrupt, probably a system management interrupt.
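
As for what counts as "out of the ordinary": the simplest thing I can
think of is to take the minimum observed latency as the undisturbed
baseline and flag any sample that exceeds it by more than some margin.
A hypothetical sketch of the stats update (the threshold value is a
guess and would have to be tuned per machine):

#define THRESHOLD_CYCLES 5000   /* placeholder, needs tuning */

static unsigned int min_cycles = ~0U;
static unsigned int suspect_samples;    /* probable NMI/SMI hits */

static void update_stats(unsigned int cycles)
{
	if (cycles < min_cycles)
		min_cycles = cycles;
	else if (cycles > min_cycles + THRESHOLD_CYCLES)
		suspect_samples++;      /* cycles stolen with IRQs off */
}

Whether ~5000 cycles is a sensible margin I don't know; I'd have to
measure the spread of undisturbed runs first.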

What do you think about this approach?
I'm open to comments and suggestions.

Regards.
