Date:	Sat, 9 Apr 2011 23:44:11 +0400
From:	"Nikita V. Youshchenko" <nyoushchenko@...sta.com>
To:	Oleg Nesterov <oleg@...hat.com>,
	Anders Ernevi <anders.ernevi@....semcon.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Alexander Kaliadin <akaliadin@...sta.com>,
	oishi.y@....yzk.co.jp, linux-kernel@...r.kernel.org
Subject: Re: Likely race between sys_rt_sigtimedwait() and complete_signal()

Hello Oleg and all.
Thanks for looking into this.

Unfortunately I was never able to reproduce the hang in question myself. It 
was only reproducible on the customer's side, and took hours of running to 
happen.

I can only ask Anders [CCed] to test your fix, if possible. But I'm not 
sure that is possible now that the problem has long been closed by 
switching to timerfd-based periodic execution.

See other comments below ...

> Can't find the original email, replying to Andrew's fwd.
>
> On 04/07, Andrew Morton wrote:
> > Within a project we are working on, we are facing a "rare" situation
> > where setitimer() / sigwait() - based periodic task execution hangs.
> > "Rare" means once per several hours with a 1000 Hz timer.
> >
> > For the hung thread, cat /proc/pid/status shows
> >
> > ...
> > State:	S (sleeping)
> > ...
> > SigPnd:	0000000000000000
> > ShdPnd:	0000000000002000
> > SigBlk:	0000000000000000
> > ...
> >
> > and SysRq - T shows
> >
> > [<c015b1b0>] (__schedule+0x2fc/0x37c) from [<c015b7b8>]
> > (schedule+0x1c/0x30)
> > [<c015b7b8>] (schedule+0x1c/0x30) from [<c015b8c4>]
> > (schedule_timeout+0x18/0x1dc)
> > [<c015b8c4>] (schedule_timeout+0x18/0x1dc) from [<c004a084>]
> > (sys_rt_sigtimedwait+0x1b4/0x288)
> > [<c004a084>] (sys_rt_sigtimedwait+0x1b4/0x288) from [<c001cf00>]
> > (ret_fast_syscall+0x0/0x28)
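As an aside, the ShdPnd mask quoted above can be decoded mechanically: bit n set in the mask means signal n+1 is pending, so 0x2000 (bit 13) is signal 14, i.e. SIGALRM. A quick sketch in Python (the value is the one from the report; signal numbers assume Linux):

```python
import signal

shdpnd = 0x0000000000002000  # ShdPnd line from /proc/<pid>/status

# Bit n in the pending mask corresponds to signal number n + 1.
pending = [signal.Signals(bit + 1) for bit in range(64) if (shdpnd >> bit) & 1]
print(pending)  # [<Signals.SIGALRM: 14>]
```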
>
> Is this thread the group leader?

I don't know.
Anders, could you please answer?
"Group leader" is the main thread, i.e. the one that entered the 
application's main() function on startup.

> > All other threads have SIGALRM blocked as they should; looking
> > through /proc/X/status confirms this.
>
> Did they ever have SIGALRM unblocked?

As far as I understand, all threads are created, and their signal masks 
set, at application startup - and the hang happens when the application has 
already been running for a long time (and has executed thousands, if not 
millions, of iterations successfully).

So, unless some libc or libGL routine plays hidden games with signal masks, 
SIGALRM should not become unblocked.
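For reference, the pattern in question looks roughly like this (a minimal, hypothetical sketch in Python rather than the application's actual C code; the key point is that the mask is set before any threads are spawned, so they all inherit it):

```python
import signal
import threading

# Block SIGALRM before creating any threads: every thread spawned
# afterwards inherits the blocked mask, so the signal can only be
# consumed synchronously via sigwait().
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGALRM})

ticks = []

def periodic_task(iterations):
    # Dedicated thread: each sigwait() returns once per timer tick.
    for _ in range(iterations):
        signum = signal.sigwait({signal.SIGALRM})
        ticks.append(signum)

worker = threading.Thread(target=periodic_task, args=(3,))
worker.start()

# 100 Hz here for brevity; the report used a 1000 Hz timer.
signal.setitimer(signal.ITIMER_REAL, 0.01, 0.01)
worker.join()
signal.setitimer(signal.ITIMER_REAL, 0.0)  # disarm the timer

print(ticks)  # [14, 14, 14] -- SIGALRM, three ticks
```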

> > So for some reason, SIGALRM was successfully delivered by the timer,
> > the bit was set in ShdPnd [I guess at the bottom of __send_signal()],
> > but the thread somehow still went to schedule() and never woke.
>
> Thanks for the detailed report.
>
> There is an old, ancient problem which I constantly forget to fix.
> It _can_ perfectly explain the hang, at least in theory. I'll try
> to make the patch on Monday.
>
> In short: if a thread T runs with SIGALRM unblocked while another
> thread sleeps in sigtimedwait(), and then T blocks SIGALRM, the
> signal can be "lost" as above.
>
> Does your application do something like this? If not, then there
> is another problem.

As I've written above, I don't think this is the case - although I can't be 
100% sure.
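For concreteness, the sequence Oleg describes would look something like the following (a hypothetical Python sketch, with a no-op handler and a timeout added so it always terminates; which thread ends up consuming the process-directed signal is not deterministic, and on an affected kernel the waiter could miss it entirely):

```python
import signal
import threading
import time

# A no-op handler so an unblocked SIGALRM cannot kill the process.
signal.signal(signal.SIGALRM, lambda signum, frame: None)
# Start with SIGALRM blocked everywhere.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGALRM})

got = []

def waiter():
    # Sleeps in sigtimedwait() like the hung thread in the report,
    # but with a timeout so this demo always finishes.
    info = signal.sigtimedwait({signal.SIGALRM}, 2.0)
    got.append(info.si_signo if info is not None else None)

w = threading.Thread(target=waiter)
w.start()
time.sleep(0.1)  # let the waiter reach sigtimedwait()

# "Thread T" (here, the main thread) briefly runs with SIGALRM
# unblocked while the timer fires, then re-blocks it -- the window
# Oleg describes.  The process-directed signal may then be handled
# here instead of waking the sigtimedwait() caller.
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGALRM})
signal.setitimer(signal.ITIMER_REAL, 0.05)  # one-shot SIGALRM
time.sleep(0.3)
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGALRM})

w.join()
print(got)  # [14] if the waiter consumed it, [None] if thread T did
```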

> > This is on an embedded system running a vendor 2.6.31-based kernel;
> > moving forward is unfortunately impossible because of hardware support
> > issues.
>
> If I make the patch for 2.6.31, any chance you can test it?

See above. I can't. Maybe Anders can.

> > However I guess the race we faced still exists in the current upstream
> > kernel,
>
> Yes, this is possible. OTOH, the bug can be anywhere, not necessarily in
> signal.c, and it might be already fixed.

Well, I wrote the original report because I thought the results of my 
analysis could help kernel developers fix a probably-still-existing issue 
that is hard to reproduce.

Thanks for looking into this anyway.

Nikita
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
