Date:	Wed, 23 Jan 2008 17:36:22 +0100
From:	Wolfgang Grandegger <wg@...ndegger.com>
To:	Luotao Fu <l.fu@...gutronix.de>
CC:	Steven Rostedt <rostedt@...dmis.org>,
	Wolfgang Grandegger <wg@...ndegger.com>,
	LKML <linux-kernel@...r.kernel.org>,
	RT <linux-rt-users@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: 2.6.24-rc8-rt1: Strange latencies on mpc5200 powerpc

Hi Fu,

Luotao Fu wrote:
> Hi folks,
> 
> On Thu, Jan 17, 2008 at 11:13:26AM +0100, Wolfgang Grandegger wrote:
>> It builds and runs fine on my Icecube-MPC5200 board, now also with the
>> latency tracer enabled. That's great. Still, "cyclictest -n -p80 -i1000"
>> reports latencies up to 400 us and therefore I tried to trigger and save
>> a high latency trace using:
>>
>>   # ./cyclictest -n -p80 -i1000 -b400
>>   1.21 0.33 0.11 4/42 1048
>>
>>   T: 0 (  914) P:80 I:1000 C:  38726 Min:     61 Act:  107 Avg:  106 Max:     377
>>   [   91.042169] (      cyclictest-914  |#0): new 39733427 us user-latency.
>>   bash-3.00# cat /proc/latency_trace > trace.log
>>
> 
> I was doing some tests on my mpc5200b board to reproduce the high latencies
> measured by Wolfgang.
> 
> I ran some tests with
> while [ 1 ]; do ls /bin; done
> as the non-RT workload, as in Wolfgang's scenario.

I also did some more measurements and, by chance, made some interesting
observations. I will summarize them in more detail later on; here are some
preliminary results. My high latencies of up to 570 us (without the latency
tracer) seem to be caused mainly by the following setting:

  CONFIG_RCU_TRACE=m

which is the default if CONFIG_MODULES=y. With CONFIG_RCU_TRACE=y the
latencies go down significantly. I furthermore noticed a negative impact
from CONFIG_NO_HZ and CONFIG_PPC_BESTCOMM_GEN_BD. With the following
settings the latencies have not exceeded 140 us so far:

  CONFIG_PREEMPT_RCU_BOOST=y
  CONFIG_RCU_TRACE=y
  # CONFIG_PPC_BESTCOMM_GEN_BD is not set
  # CONFIG_NO_HZ is not set

With CONFIG_NO_HZ=y or CONFIG_PPC_BESTCOMM_GEN_BD=y the latency
increases by approximately 100 to 150 us each.
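
FWIW, if you want to switch an existing .config from those defaults to the
settings above, something along these lines should do it. This is just a
rough sketch (it assumes GNU sed, that the options currently have the values
shown in the patterns, and that CONFIG_PREEMPT_RCU_BOOST is already enabled);
adjust paths and your cross-compile setup as needed:

  $ sed -i -e 's/^CONFIG_RCU_TRACE=m$/CONFIG_RCU_TRACE=y/' \
           -e 's/^CONFIG_NO_HZ=y$/# CONFIG_NO_HZ is not set/' \
           -e 's/^CONFIG_PPC_BESTCOMM_GEN_BD=y$/# CONFIG_PPC_BESTCOMM_GEN_BD is not set/' \
           .config
  $ make ARCH=powerpc oldconfig

oldconfig should then resolve any dependent options without re-asking for the
ones set above.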

> Now I also got some strange values. My latency lies at around 100 us and the
> max. latency normally stays at about 150-200 us. However, the max. value
> occasionally breaks out to very high values. I got a max. of about 850 us after
> some rounds of measurement, which is definitely too high for this processor. I
> made some traces and attached the last "interesting" path to this mail.
> trace_600_1.log and trace_600_2.log were both taken with -b600. For comparison I
> also added a "normal" trace taken with -b150. In the traces with abnormally long
> latency there is a big "hole" between the last call, which is
> clockevents_program_event() in both long traces, and the actual schedule()
> call. The holes are both about 600 us long, which actually accounts for the main
> part of the latency.
> 
> Two important things I also noted during my tests:
> 1. I got the unusual latencies on a system booted with nfsrootfs. I ran the same
> test scenario on a system booted from flash and got no extraordinary results.
> After several hours of testing my max. latency lies at around 200 us.
> 2. Even on an nfsrootfs system I could not get the high latencies if I ran
> hackbench as the non-RT workload.
> 
> Hence I suppose the unusual results are caused by network/filesystem access.
> However, I have no idea what could be the reason for the "holes" in the trace.
> It looks almost like the CPU is doing nothing. As I don't have a trace from
> another architecture at hand at the moment, I can't say with 100 percent
> certainty whether the tracer is "missing" anything.
> 
> Any comments, ideas?

Could you check your .config and try the __good__ settings mentioned
above? Can you confirm my observations? Don't ask me why these settings
matter; maybe somebody else could shed some light on this.
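
For a quick check of the current values, a grep like the following over the
.config of your kernel build should be enough (the path is just an example;
the "# ... is not set" lines count as well):

  $ grep -E 'PREEMPT_RCU_BOOST|RCU_TRACE|NO_HZ|PPC_BESTCOMM_GEN_BD' .config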

Wolfgang

