Message-ID: <4BBFA0C6.2610.2C4FFE@Frantisek.Rysanek.post.cz>
Date:	Fri, 09 Apr 2010 21:48:54 +0200
From:	"Frantisek Rysanek" <Frantisek.Rysanek@...t.cz>
To:	linux-kernel@...r.kernel.org
Subject: setitimer vs. threads: SIGALRM returned to which thread? (process master or individual child)

Dear everyone,

I hope I'm not too far off topic on this list... specifically, I
hope the issue lives in the kernel, as opposed to the user-space
part of NPTL that ships with libc, the distros etc.
At the same time, I feel a bit ashamed to ask such a newbie question
on LKML of all places - except that there doesn't seem to be a better
place to ask... :->

Some years ago I wrote a couple of programs that use the setitimer()
syscall in a threaded environment, relying on a special property it
had at the time: setitimer() had per-thread granularity. It delivered
the SIGALRM from the timer to the particular thread that had called
setitimer(). I believe that was around RH8 through Fedora 5.
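
To make that concrete, the construct was roughly the following - a
sketch from memory, not the original code; worker() and the tick
periods are made up for illustration. On today's kernels the second
setitimer() of course just overwrites the single process-wide timer,
which is exactly my problem:

/* Old-style pattern: every worker arms its own ITIMER_REAL and
 * expects "its" SIGALRM back.  Only held on pre-2.6.12 semantics. */
#include <pthread.h>
#include <signal.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static void on_alarm(int sig)
{
    (void)sig;                   /* old semantics: a per-thread tick */
}

static void *worker(void *arg)
{
    long period_ms = (long)arg;

    struct itimerval tv;
    tv.it_interval.tv_sec  = period_ms / 1000;
    tv.it_interval.tv_usec = (period_ms % 1000) * 1000;
    tv.it_value = tv.it_interval;
    setitimer(ITIMER_REAL, &tv, NULL); /* "my" timer - old kernels only */

    for (;;)
        pause();                       /* woken by "my" SIGALRM */
    return NULL;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);     /* handler is per-process anyway */

    pthread_t a, b;
    pthread_create(&a, NULL, worker, (void *)100L);  /* 100 ms tick */
    pthread_create(&b, NULL, worker, (void *)250L);  /* 250 ms tick */
    pthread_join(a, NULL);             /* never returns in this sketch */
    return 0;
}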

Recently I've recompiled the programs on a newer distro (Fedora 10)
and voila: setitimer() now delivers the SIGALRM to the program's
master thread, no matter which child thread called setitimer()...

Based on further reading, I assume this is related to making NPTL
more POSIX-compliant. The new behavior is what POSIX requires, the
old per-thread behavior was not. See "man pthreads": under the NPTL
heading there is a note saying
"Threads do not share interval timers (fixed in kernel 2.6.12)."

Yes, it used to be quite a relief to have Linux manage the timers for
me. Now I have two options to choose from:
1) write my own "timer queueing" (timekeeping) code to order the
timers for me in the master thread (a rough sketch follows right
below this list)
2) find another function, similar to setitimer(), that works the way
setitimer() used to work in the old days...
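
As for option #1, I imagine something along these lines - only a
rough sketch, with made-up names (MAX_TIMERS, deadlines[],
dispatch_expired()): the master thread keeps all the deadlines, does
a gettimeofday() to find the nearest one, arms a single one-shot
ITIMER_REAL for it, and on each SIGALRM dispatches whatever has
expired and re-arms:

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define MAX_TIMERS 16                          /* made-up capacity */

static struct timeval deadlines[MAX_TIMERS];   /* {0,0} == slot unused */
static volatile sig_atomic_t alarm_fired;

static void on_alarm(int sig) { (void)sig; alarm_fired = 1; }

static int rearm_nearest(void)                 /* 0 == nothing pending */
{
    struct timeval now, best, delta;
    timerclear(&best);
    gettimeofday(&now, NULL);

    for (int i = 0; i < MAX_TIMERS; i++) {
        if (!timerisset(&deadlines[i]))
            continue;
        if (!timerisset(&best) || timercmp(&deadlines[i], &best, <))
            best = deadlines[i];
    }
    if (!timerisset(&best))
        return 0;

    if (timercmp(&best, &now, <)) {            /* already overdue */
        delta.tv_sec = 0;
        delta.tv_usec = 1;                     /* fire "immediately" */
    } else {
        timersub(&best, &now, &delta);
        if (!timerisset(&delta))
            delta.tv_usec = 1;                 /* 0/0 it_value would disarm */
    }
    struct itimerval tv = { {0, 0}, delta };   /* one-shot, no interval */
    setitimer(ITIMER_REAL, &tv, NULL);
    return 1;
}

static void dispatch_expired(void)
{
    struct timeval now;
    gettimeofday(&now, NULL);
    for (int i = 0; i < MAX_TIMERS; i++) {
        if (timerisset(&deadlines[i]) && !timercmp(&deadlines[i], &now, >)) {
            printf("timer %d expired\n", i);   /* hand event to its thread */
            timerclear(&deadlines[i]);
        }
    }
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);

    struct timeval now;                        /* two sample deadlines */
    gettimeofday(&now, NULL);
    deadlines[0] = now; deadlines[0].tv_sec += 1;
    deadlines[1] = now; deadlines[1].tv_sec += 3;

    sigset_t block, waitmask;                  /* keep SIGALRM blocked      */
    sigemptyset(&block);                       /* except while waiting, to  */
    sigaddset(&block, SIGALRM);                /* dodge the classic "signal */
    sigprocmask(SIG_BLOCK, &block, &waitmask); /* just before pause()" race */
    sigdelset(&waitmask, SIGALRM);

    while (rearm_nearest()) {
        sigsuspend(&waitmask);                 /* atomically unblock + wait */
        if (alarm_fired) {
            alarm_fired = 0;
            dispatch_expired();
        }
    }
    return 0;
}

The annoying part is exactly that bookkeeping, plus getting the
expiry notice from the master thread into the right worker thread
(pipe, condvar, whatever).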

Obviously option #2 would be much easier for me to abuse :-)
For instance, does select() work in the desired per-thread way?
In the app that I'm trying to update right now, I have a serial
device open per thread, and I need to detect character timeouts
(frame breaks) - see the sketch below.
But I have other apps with a *myriad* of stand-alone timers, not
related to any "file descriptor like" device, generating "spurious
events" for me, used to drive a bunch of threads doing some polling
on various dumb "networked" devices (external bus slaves)...
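
As far as I can tell from the man page, select() should be fine for
the serial case: its timeout is just an argument to the call, only
the calling thread sleeps, and no signals are involved. Each serial
thread could run something like this on its own descriptor (a sketch
only - FRAME_GAP_MS, handle_frame() and the stdin stand-in in main()
are all made up):

#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

#define FRAME_GAP_MS 20            /* silence that ends a frame (made up) */

static void handle_frame(const unsigned char *buf, size_t len)
{
    (void)buf;
    printf("frame of %zu bytes\n", len);
}

static void serial_loop(int fd)    /* run by each serial thread on its fd */
{
    unsigned char frame[512];
    size_t len = 0;

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        struct timeval tv;         /* re-init every pass: select() may */
        tv.tv_sec = 0;             /* modify it on Linux               */
        tv.tv_usec = FRAME_GAP_MS * 1000;

        /* no timeout while idle, character timeout while inside a frame */
        int rc = select(fd + 1, &rfds, NULL, NULL, len ? &tv : NULL);
        if (rc < 0)
            break;                 /* error (EINTR handling omitted) */
        if (rc == 0) {             /* gap elapsed -> frame break */
            handle_frame(frame, len);
            len = 0;
            continue;
        }
        ssize_t n = read(fd, frame + len, sizeof(frame) - len);
        if (n <= 0)
            break;                 /* port closed or read error */
        len += (size_t)n;
        if (len == sizeof(frame)) {    /* buffer full: flush as a frame */
            handle_frame(frame, len);
            len = 0;
        }
    }
}

int main(void)                     /* stand-in: stdin instead of a tty fd */
{
    serial_loop(STDIN_FILENO);
    return 0;
}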

For a moment I was wondering how complex the relevant kernel patch
was and how difficult it would be to revert - but then again such a
revert might disrupt various other pieces of user-space code in my
distro, so it's probably not such a good idea anyway :-) Also, if I
resort to patching my kernel, it makes my user-space code fairly non-
portable to other people's machines. Let alone the bulk of code
evolution in Linux kernel timekeeping and process management since
2.6.12, layered on top of the original patch.
AIX appears to have ITIMER_REAL_TH [sob]. Not that I'm going to try
AIX for this particular reason :-)

Wouldn't it in fact be more straightforward and "cheaper" (in terms
of processing overhead) to have the timers thread-aware? If I just
call setitimer() in each thread, that costs some number of system
calls. If instead I have to do my own timekeeping (event queueing) in
user space, I'll probably need to call getitimer() or gettimeofday()
ahead of every setitimer(), every time a thread needs to set a timer.
I'm not sure about the required number of pointer indirections in the
kernel for either case :-)

I understand that POSIX compliance is a good thing, for portability
reasons. At the same time, resorting to per-process granularity of
timers somehow "feels backwards" - from thread awareness back to the
old "no threads" UNIX world. It reminds me of the occasional debate
about whether GCC extensions to standard C are a good thing to use,
or whether they should be avoided...

I haven't found much debate about this "timers vs. threads
granularity" point in mailing-list archives or on the web.
Any further hints/pointers/kicks in the right direction/recommended 
reading are welcome :-)
If you've read this far, thanks for your time...

Frank Rysanek

