Date:	Sat, 31 Mar 2012 20:02:46 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Paul Rolland <rol@...be.net>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: Blocked in keyctl(...) ??? - RCU related ???

On Fri, Mar 16, 2012 at 09:43:28AM +0100, Paul Rolland wrote:
> Hello,
> 
> On Mon, 12 Mar 2012 17:03:01 +0100
> Paul Rolland (ポール・ロラン)  <rol@...be.net> wrote:
> 
> > Sometimes, my applications start "locking": nothing happens anymore, can't
> > kill them... 
> > I managed to strace them and found that they are executing the following
> > syscall:
> > 
> > 13686 keyctl(0x1, 0, 0xffffffffffffffff, 0x7fa90fd6a2c0, 0
> > 
> > and that syscall never returns.
> 
> OK, this happened again, while running 3.3-rc7... I've used sysrq to
> collect some information, and what I can see from that is the following:
> 
> Mar 16 08:40:02 tux kernel: Call Trace:
> Mar 16 08:40:02 tux kernel: [<ffffffff813b8771>] ? extract_buf+0x91/0x140
> Mar 16 08:40:02 tux kernel: [<ffffffff8166fd5a>] schedule+0x3a/0x50
> Mar 16 08:40:02 tux kernel: [<ffffffff8166e09d>] schedule_timeout+0x1ed/0x260
> Mar 16 08:40:02 tux kernel: [<ffffffff810636c1>] ? get_parent_ip+0x11/0x50
> Mar 16 08:40:02 tux kernel: [<ffffffff810636c1>] ? get_parent_ip+0x11/0x50
> Mar 16 08:40:02 tux kernel: [<ffffffff810636c1>] ? get_parent_ip+0x11/0x50
> Mar 16 08:40:02 tux kernel: [<ffffffff8166f100>] wait_for_common+0xc0/0x160
> Mar 16 08:40:02 tux kernel: [<ffffffff81062150>] ? try_to_wake_up+0x2e0/0x2e0
> Mar 16 08:40:02 tux kernel: [<ffffffff810ad020>] ? __call_rcu+0x150/0x150
> Mar 16 08:40:02 tux kernel: [<ffffffff8166f1b8>] wait_for_completion+0x18/0x20
> Mar 16 08:40:02 tux kernel: [<ffffffff81054628>] wait_rcu_gp+0x48/0x50
> Mar 16 08:40:02 tux kernel: [<ffffffff81054630>] ? wait_rcu_gp+0x50/0x50
> Mar 16 08:40:02 tux kernel: [<ffffffff810ac09c>] synchronize_rcu+0x1c/0x20
> Mar 16 08:40:02 tux kernel: [<ffffffff812ba6a9>] install_session_keyring_to_cred+0x59/0xc0
> Mar 16 08:40:02 tux kernel: [<ffffffff812bac3a>] join_session_keyring+0x12a/0x170
> Mar 16 08:40:02 tux kernel: [<ffffffff812b8cd2>] keyctl_join_session_keyring+0x32/0x60
> Mar 16 08:40:02 tux kernel: [<ffffffff812ba278>] sys_keyctl+0x168/0x180
> Mar 16 08:40:02 tux kernel: [<ffffffff81671b62>] system_call_fastpath+0x16/0x1b
> 
> The complete traces collected in /var/log/messages are attached to this email.
> 
> Is there anything I can do to investigate this?

From your message output, it looks like something is stalling RCU grace
periods, possibly due to a task being preempted in an RCU read-side
critical section (this assumes you are running CONFIG_PREEMPT=y).
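For reference, the synchronize_rcu() in your trace cannot return until every
RCU read-side critical section that was already running when it was called
has finished, so a single preempted reader is enough to hold up that keyctl()
call indefinitely.  A minimal sketch of the constraint (generic reader/updater
pattern for illustration only, not the keyring code itself):

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {
		int data;
	};

	static struct foo __rcu *global_foo;

	/*
	 * Reader: with CONFIG_PREEMPT=y this critical section is preemptible.
	 * A reader preempted between rcu_read_lock() and rcu_read_unlock()
	 * blocks the grace period until it runs again and exits the section.
	 */
	static int reader(void)
	{
		struct foo *p;
		int val;

		rcu_read_lock();
		p = rcu_dereference(global_foo);
		val = p ? p->data : -1;
		rcu_read_unlock();
		return val;
	}

	/*
	 * Updater: synchronize_rcu() waits for all pre-existing readers --
	 * the same wait the keyctl() path above is stuck in.
	 */
	static void updater(struct foo *newp)
	{
		struct foo *old;

		/* Update-side lock assumed held by the caller. */
		old = rcu_dereference_protected(global_foo, 1);
		rcu_assign_pointer(global_foo, newp);
		synchronize_rcu();
		kfree(old);
	}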

Could you please try reproducing with CONFIG_RCU_CPU_STALL_VERBOSE=y?
This would identify tasks stalling the RCU grace period.
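The relevant bits of the .config would look roughly like the excerpt below
(exact option dependencies vary with kernel version and RCU flavor, so treat
this as an assumed excerpt rather than a complete recipe):

	CONFIG_PREEMPT=y
	CONFIG_TREE_PREEMPT_RCU=y
	CONFIG_RCU_CPU_STALL_VERBOSE=y

The resulting stall warnings appear in dmesg and name the tasks blocking the
current grace period.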

							Thanx, Paul

