Date:	Thu, 12 May 2011 18:57:56 +0800
From:	Yong Zhang <yong.zhang0@...il.com>
To:	Juri Lelli <juri.lelli@...il.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: lock_stat &rq->lock/1 class name meaning

On Wed, May 11, 2011 at 06:26:10PM +0200, Juri Lelli wrote:
> Hi,
> I'm trying to collect contention statistics through /proc/lock_stat
> about scheduler data structures.
> 
> What I obtain if I do "cat /proc/lock_stat" is something like:
> ...
> &rq->lock:     13128    13128    0.43    190.53    103881.26    97454    3453404    0.00    401.11    13224683.11
> ---------
> &rq->lock      645    [<ffffffff8103bfc4>] task_rq_lock+0x43/0x75
> &rq->lock      297    [<ffffffff8104ba65>] try_to_wake_up+0x127/0x25a
> &rq->lock      360    [<ffffffff8103c4c5>] select_task_rq_fair+0x1f0/0x74a
> &rq->lock      428    [<ffffffff81045f98>] scheduler_tick+0x46/0x1fb
> ---------
> &rq->lock       77    [<ffffffff8103bfc4>] task_rq_lock+0x43/0x75
> &rq->lock      174    [<ffffffff8104ba65>] try_to_wake_up+0x127/0x25a
> &rq->lock     4715    [<ffffffff8103ed4b>] double_rq_lock+0x42/0x54
> &rq->lock      893    [<ffffffff81340524>] schedule+0x157/0x7b8
> ...
> &rq->lock/1:   11526    11488    0.33    388.73    136294.31    21461    38404    0.00    37.93    109388.53
> -----------
> &rq->lock/1    11526    [<ffffffff8103ed58>] double_rq_lock+0x4f/0x54
> -----------
> &rq->lock/1     5645    [<ffffffff8103ed4b>] double_rq_lock+0x42/0x54
> &rq->lock/1     1224    [<ffffffff81340524>] schedule+0x157/0x7b8
> &rq->lock/1     4336    [<ffffffff8103ed58>] double_rq_lock+0x4f/0x54
> &rq->lock/1      181    [<ffffffff8104ba65>] try_to_wake_up+0x127/0x25a
> 
> I guess the first one is about the per-rq (per-CPU) spinlock, but
> what about the second? What does the "/1" stand for?

It is also an rq lock, but its lockdep subclass is 1.

Take a look at raw_spin_lock_nested(&this_rq->lock, SINGLE_DEPTH_NESTING)
in _double_lock_balance().
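
The pattern is roughly like this (a sketch from memory of the scheduler
code of that era, not a verbatim copy of the source):

	/*
	 * Taking two rq locks at once would normally make lockdep
	 * complain about rq->lock -> rq->lock recursion, since both
	 * locks belong to the same class.  The inner lock is therefore
	 * annotated with subclass SINGLE_DEPTH_NESTING (== 1), and
	 * lock_stat reports that subclass separately as "&rq->lock/1".
	 * Lock ordering by address keeps the acquisition order stable.
	 */
	if (rq1 < rq2) {
		raw_spin_lock(&rq1->lock);
		raw_spin_lock_nested(&rq2->lock, SINGLE_DEPTH_NESTING);
	} else {
		raw_spin_lock(&rq2->lock);
		raw_spin_lock_nested(&rq1->lock, SINGLE_DEPTH_NESTING);
	}

So the "/1" in your output is simply the subclass number passed to
raw_spin_lock_nested(); that is why nearly all of its contention comes
from double_rq_lock().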

> Since every rq has a different spinlock, does &rq->lock group
> numbers from all the runqueues?

Yup.

Thanks,
Yong

> 
> Thanks a lot,
> 	Juri
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
