Date:	Mon, 10 Sep 2007 11:59:37 -0700
From:	"Pallipadi, Venkatesh" <venkatesh.pallipadi@...el.com>
To:	"Ingo Molnar" <mingo@...e.hu>, "Andi Kleen" <andi@...stfloor.org>
Cc:	"Thomas Gleixner" <tglx@...utronix.de>,
	"Andrew Morton" <akpm@...ux-foundation.org>,
	"linux-kernel" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] Track accurate idle time with tick_sched.idle_sleeptime

>-----Original Message-----
>From: Ingo Molnar [mailto:mingo@...e.hu] 
>Sent: Sunday, September 02, 2007 3:02 AM
>To: Andi Kleen
>Cc: Pallipadi, Venkatesh; Thomas Gleixner; Andrew Morton; linux-kernel
>Subject: Re: [PATCH] Track accurate idle time with tick_sched.idle_sleeptime
>
>
>* Andi Kleen <andi@...stfloor.org> wrote:
>
>> > at least the current out-of-idle code already does what amounts to
>> > a PM-timer read when exiting from C2 or C3 mode. The
>> 
>> C2/C3 are already slow. I more worry about C1.
>
>C2/C3 are only slow in older CPUs - and they are getting faster and 
>faster. (also, newer systems do less of C1, due to increased energy 
>awareness.)
>
>would be nice to measure the overhead/impact i suspect.
>

I agree with the latency concern on the way out of C1. One option I was
looking at was to delay the idle time update and do something like this
(a rough sketch follows the list):
- Enter C-state idle
- Break out due to an interrupt
- Record the TSC
- Handle the interrupt
- In irq_exit(), do a delayed update of the idle time:
  - Compare the current TSC against the previously recorded TSC and the
    current time to get the actual idle time
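
Roughly, in code (all names here are illustrative, not from the patch;
cycles_to_ns() just stands in for whatever TSC-to-nanoseconds
conversion the architecture provides):

#include <linux/ktime.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(u64, idle_exit_tsc);

/* Called immediately on wakeup: a single TSC read, no clocksource
 * access, so the interrupt handler itself is not delayed. */
static inline void idle_exit_record_tsc(void)
{
	rdtscll(__get_cpu_var(idle_exit_tsc));
}

/* Called from irq_exit(): now that the interrupt is handled,
 * reconstruct when idle actually ended by backing out the time we
 * spent in the handler. */
static void idle_deferred_update(struct tick_sched *ts)
{
	u64 now_tsc, irq_ns;
	ktime_t now, idle_end;

	rdtscll(now_tsc);
	now = ktime_get();
	irq_ns = cycles_to_ns(now_tsc - __get_cpu_var(idle_exit_tsc));
	idle_end = ktime_sub_ns(now, irq_ns);

	ts->idle_sleeptime = ktime_add(ts->idle_sleeptime,
				       ktime_sub(idle_end,
						 ts->idle_entrytime));
}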

This way, we would not add any latency to the interrupt handling
itself. But it is not so clean to do: there are older systems where the
TSC varies with CPU frequency, so it is not a generic solution the way
ktime_get() is.
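
For comparison, the generic ktime_get()-based accounting is basically
just bracketing the idle period (a simplified sketch, not the exact
patch code):

static void idle_enter_account(struct tick_sched *ts)
{
	ts->idle_entrytime = ktime_get();
	ts->idle_active = 1;
}

static void idle_exit_account(struct tick_sched *ts)
{
	ktime_t now = ktime_get();

	if (ts->idle_active) {
		ts->idle_sleeptime = ktime_add(ts->idle_sleeptime,
					       ktime_sub(now,
							 ts->idle_entrytime));
		ts->idle_active = 0;
	}
}

The ktime_get() here reads the clocksource on the wakeup path, which is
exactly where the C1 exit latency concern comes from.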

I had tried lmbench before sending out the patch and did not see any
increase in latency in the scheduler- or pipe-related numbers. Is there
any other microbenchmark with which I could quantify the latency?

Also, the current patch does the accounting only for the idle process.
Going forward, though, we may need similar accounting even while a
process is executing, so that the scheduler can do fine-grained
time-slice management and not charge interrupt-handling time to the
process. We will then face the same issue at a larger scale.
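
As a rough sketch of that (hypothetical names, not proposed code):
bracket hardirq processing with timestamps in irq_enter()/irq_exit()
and charge the delta to a per-CPU counter instead of the current task:

#include <linux/ktime.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(ktime_t, irq_enter_time);
static DEFINE_PER_CPU(ktime_t, irq_time_total);

static inline void account_irq_enter_sketch(void)
{
	__get_cpu_var(irq_enter_time) = ktime_get();
}

static inline void account_irq_exit_sketch(void)
{
	ktime_t delta = ktime_sub(ktime_get(),
				  __get_cpu_var(irq_enter_time));

	/* Accumulate interrupt time per CPU; the scheduler would
	 * consult this instead of billing the interrupted task. */
	__get_cpu_var(irq_time_total) =
		ktime_add(__get_cpu_var(irq_time_total), delta);
}

The same clocksource-read-latency question applies here too, just on
every interrupt instead of only on idle exit.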

Thanks,
Venki  