Message-ID: <a36005b50705011510k4cd7873dr3fb8206c0f107169@mail.gmail.com>
Date: Tue, 1 May 2007 15:10:40 -0700
From: "Ulrich Drepper" <drepper@...il.com>
To: "Bill Irwin" <bill.irwin@...cle.com>,
"Ulrich Drepper" <drepper@...il.com>,
"Andrew Morton" <akpm@...ux-foundation.org>,
"Eric Dumazet" <dada1@...mosbay.com>, linux-kernel@...r.kernel.org
Cc: wli@...omorphy.com
Subject: Re: per-thread rusage

On 5/1/07, Bill Irwin <bill.irwin@...cle.com> wrote:
> The basic idea is to try to do it similarly to how everyone else does,
> so userspace (I suppose this would include glibc) doesn't have to bend
> over backward to accommodate it. Or basically to do what everyone
> expects.

I think besides RUSAGE_THREAD you'll find no precedent. It's all new;
you have to tread the path yourself. The RUSAGE_THREAD interface is not
sufficient, actually. First, if a thread terminates we don't keep it
around until a wait call can be issued. We terminate threads right away
and the synchronization with waiters is done independently. Second, the
thread ID (aka kernel process ID) is not exported, nor should it be.
This is easy to solve, though: introduce a pthread_getrusage interface.
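
For reference, this is roughly what the RUSAGE_THREAD style of query
looks like from userspace, assuming a kernel and libc that provide the
flag; it can only report on the calling thread, which is exactly why it
says nothing about a thread that has already terminated:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage ru;

    /* RUSAGE_THREAD reports resource usage for the calling thread only;
       it cannot be pointed at another thread, live or dead. */
    if (getrusage(RUSAGE_THREAD, &ru) != 0) {
        perror("getrusage");
        return 1;
    }

    printf("user %ld.%06lds  sys %ld.%06lds\n",
           (long) ru.ru_utime.tv_sec, (long) ru.ru_utime.tv_usec,
           (long) ru.ru_stime.tv_sec, (long) ru.ru_stime.tv_usec);
    return 0;
}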

To solve the first problem, the terminating thread should write out the
data before it is gone. Automatically. After registration. So, you
could have a syscall to register a structure in the user address space
which is filled with the data. If the data structure is the same as
rusage you're done. If you use a different data structure you need to
introduce a getrusage-equivalent syscall.
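
As a purely hypothetical sketch of that registration idea from the
application's side (register_exit_rusage() and the syscall it would wrap
do not exist; they only stand in for the proposed "register a user-space
structure that the kernel fills at thread exit" call):

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>

/* Stand-in for the proposed registration syscall: tell the kernel where
   to write this thread's final rusage data when the thread exits. */
static int register_exit_rusage(struct rusage *where)
{
    (void) where;   /* would be a syscall in a real implementation */
    return -1;
}

static void *worker(void *arg)
{
    /* The buffer lives in the creator's memory, so it stays readable
       after this thread is gone and the kernel has filled it in. */
    register_exit_rusage(arg);

    /* ... real work ... */
    return NULL;
}

int main(void)
{
    struct rusage final_usage;
    pthread_t th;

    memset(&final_usage, 0, sizeof(final_usage));
    pthread_create(&th, NULL, worker, &final_usage);
    pthread_join(th, NULL);

    /* With the real syscall in place, final_usage would now hold the
       terminated thread's accounting data. */
    printf("user %ld.%06lds\n", (long) final_usage.ru_utime.tv_sec,
           (long) final_usage.ru_utime.tv_usec);
    return 0;
}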

With this infrastructure in place we could have

int pthread_getrusage(pthread_t, struct rusage *);

and

int pthread_join4(pthread_t, void **valueptr, struct rusage *);

pthread_join4 is a joke, we need a better name, but you get the drift.
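
A usage sketch, assuming something along those lines gets added; neither
function exists in glibc and the names are only the placeholders from
above:

#include <pthread.h>
#include <sys/resource.h>

/* Hypothetical prototypes, matching the proposal above. */
extern int pthread_getrusage(pthread_t, struct rusage *);
extern int pthread_join4(pthread_t, void **valueptr, struct rusage *);

void collect(pthread_t th)
{
    struct rusage ru;
    void *result;

    /* Query a thread that is still running... */
    pthread_getrusage(th, &ru);

    /* ...or join it and pick up the final accounting in the same call. */
    pthread_join4(th, &result, &ru);
}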