Date:	Sun, 31 Jan 2010 21:44:08 +0100
From:	Jens Axboe <>
To:	Hitoshi Mitake <>
Cc:	Ingo Molnar <>,
	Peter Zijlstra <>,
	Paul Mackerras <>,
	Tom Zanussi <>,
	Steven Rostedt <>,
	Thomas Gleixner <>,
	Greg Kroah-Hartman <>
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for
	analyzing lock statistics

On Sat, Jan 30 2010, Hitoshi Mitake wrote:
> (2010-01-29 23:34), Jens Axboe wrote:
>> On Fri, Jan 22 2010, Hitoshi Mitake wrote:
>>> Adding new subcommand "perf lock" to perf.
>>> I made this patch series on
>>> latest perf/core of tip (ef12a141306c90336a3a10d40213ecd98624d274),
>>> so please apply this series to perf/core.
>> [snip]
>> I wanted to give this a go today, since I think it's pretty nifty and a
>> lot better than using /proc/lock_stat. But it basically spirals the
>> system into death [1]. How big a system did you test this on?
>> [1] Got this: [  117.097918] hrtimer: interrupt took 35093901 ns
> I tested this on a Core i7 965 + 3GB DRAM machine.
> The test program was mainly "perf bench sched messaging".
> Could you tell me the details of your test setup?

I tried to run it on a 64 thread box, on a fio job that was driving 80
disks. It was just a quick test, but after ~20 seconds it had not even
gotten started yet; it was still stuck setting up the jobs, traversing
sysfs to find disk stats, and so on. I can try something lighter to see
whether it's the CPU count or the heavy job that was making it spiral
into (near) death.
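
For reference, a lighter reproduction in the spirit of Hitoshi's test
might look roughly like this (just a sketch; it assumes the record and
report subcommands that this patch series introduces):

    # record lock events while running the scheduler messaging benchmark
    perf lock record perf bench sched messaging

    # then summarize contention from the recorded events
    perf lock report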

Jens Axboe
