Message-Id: <1254939040.7523.9.camel@marge.simson.net>
Date:	Wed, 07 Oct 2009 20:10:40 +0200
From:	Mike Galbraith <efault@....de>
To:	Frans Pop <elendil@...net.nl>
Cc:	Arjan van de Ven <arjan@...radead.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <peterz@...radead.org>,
	linux-wireless@...r.kernel.org
Subject: Re: [.32-rc3] scheduler: iwlagn consistently high in "waiting for
 CPU"

On Wed, 2009-10-07 at 19:10 +0200, Frans Pop wrote:
> On Tuesday 06 October 2009, Frans Pop wrote:
> > I've checked for 2.6.31.1 now and iwlagn is listed high there too when
> > the system is idle, but with normal values of 60-100 ms. And phy0 has
> > normal values of below 10 ms.
> > I've now rebooted with today's mainline git; phy0 now frequently shows
> > with values of around 100 ms too (i.e. higher than last time).
> >
> > Both still go way down as soon as the system is given work to do.
> >
> > With a 5 second sleep I was unable to get any significant latencies (I
> > started perf on a latencytop refresh and did a manual refresh as it
> > finished to see what happened during the perf run). The perf run does
> > seem to affect the latencies.
> > I've uploaded a chart for a 10s sleep during which I got latencies of
> > 101ms for iwlagn and 77ms for phy0:
> > http://people.debian.org/~fjp/tmp/kernel/.
> 
> Mike privately sent me a script to try to capture the latencies with perf,
> but the perf output does not show any high latencies at all. It looks as if
> we may have found a bug in latencytop here instead.
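
(For anyone following along: a generic way to capture scheduler wakeup
latencies with perf -- illustrative only, not necessarily the script in
question -- is along the lines of:

	perf sched record sleep 10
	perf sched latency

which records the sched tracepoints for ten seconds and then prints
per-task average and maximum scheduling delays.)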

Maybe.  I have a little perturbation measurement proggy which I just
fired up to verify both perf's and latencytop's numbers here.  It's a
dirt simple cycle counter tool which calibrates itself, sums
perturbations over a period of time, and emits stats.  Here, all three
are in violent agreement wrt how long "pert" is waiting for CPU.
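
The gist is something like the below -- a rough sketch only, not the
actual tool; it uses clock_gettime() where the real thing reads the
cycle counter, and the threshold and report interval are arbitrary:

/*
 * Rough sketch of a self-calibrating perturbation counter.
 * Illustrative only -- not the actual "pert" tool.
 */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static inline uint64_t now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
	uint64_t baseline = UINT64_MAX, prev, t, delta;
	uint64_t sum = 0, max = 0, start;
	unsigned long hits = 0;
	int i;

	/* calibrate: the smallest loop-to-loop delta is the loop's own cost */
	prev = now_ns();
	for (i = 0; i < 1000000; i++) {
		t = now_ns();
		if (t - prev < baseline)
			baseline = t - prev;
		prev = t;
	}

	/* measure: a delta well above baseline is time we were not running */
	start = prev = now_ns();
	for (;;) {
		t = now_ns();
		delta = t - prev;
		prev = t;
		if (delta > 10 * baseline) {		/* perturbation threshold */
			sum += delta;
			hits++;
			if (delta > max)
				max = delta;
		}
		if (t - start >= 1000000000ULL) {	/* emit stats once a second */
			printf("perturbations: %lu  sum: %llu us  max: %llu us\n",
			       hits,
			       (unsigned long long)(sum / 1000),
			       (unsigned long long)(max / 1000));
			hits = 0;
			sum = max = 0;
			start = t;
		}
	}
	return 0;
}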

	-Mike

