Message-ID: <20081117221857.GA29423@redhat.com>
Date: Mon, 17 Nov 2008 23:18:57 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Roland McGrath <roland@...hat.com>
Cc: Frank Mayhar <fmayhar@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Christoph Lameter <cl@...ux-foundation.org>,
Doug Chapman <doug.chapman@...com>, mingo@...e.hu,
adobriyan@...il.com, akpm@...ux-foundation.org,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: regression introduced by - timers: fix itimer/many thread hang
On 11/17, Roland McGrath wrote:
>
> > > -	if (!tsk->signal)
> > > +	if (tsk->exit_state)
> > > return;
> >
> > Yes, unless I missed something again, this should work. I'll send
> > the (simple) patches soon, but I have no idea how to test them.
>
> That certainly will exclude the problem of crashing in the tick interrupt
> after exit_notify. Unfortunately, it's moving in an undesirable direction
> for the long run. That is, it drops even more of the CPU time spent on
> the exit path from the accounting.
Yes, I thought about this too.
But please note that this already happens for sub-threads today (and would
even if we protected ->signal with RCU): an exiting sub-thread does not
contribute to the accounting after release_task(self). Also, once the last
thread exits the process can be reaped by its parent, but after that the
exiting threads can still use CPU.
IOW, when ->exit_signal != 0 we have already sent the notification with
utime/stime to the parent, and the parent can reap current at any moment
before it does its final schedule. I don't think we can do anything here.
But if we make ->signal refcountable, we can at least improve the case of
the exiting sub-threads.
(Just in case: I completely agree anyway, this hack (and the unlock_wait)
should be killed in 2.6.29.)
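To illustrate the refcounting direction, a purely hypothetical sketch;
sig_get()/sig_put() and the ->accounting_count field are made-up names, the
only point is that the accounting path could pin signal_struct past
release_task():

	static inline struct signal_struct *sig_get(struct signal_struct *sig)
	{
		/* caller must already see a stable ->signal pointer
		 * (tasklist_lock or RCU); this only pins the lifetime */
		if (sig)
			atomic_inc(&sig->accounting_count);
		return sig;
	}

	static inline void sig_put(struct signal_struct *sig)
	{
		if (sig && atomic_dec_and_test(&sig->accounting_count))
			__cleanup_signal(sig);	/* final free, as __exit_signal() does now */
	}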
> > However, I'm afraid there is another problem. On 32-bit CPUs we can't
> > read "u64 sum_exec_runtime" atomically, and so thread_group_cputime()
> > can "overestimate" ->sum_exec_runtime by (up to) UINT_MAX if it races
> > with the thread which updates its per_cpu_ptr(.totals). This means,
> > for example, that check_process_timers() can fire the CPUCLOCK_SCHED
> > timers ahead of time.
> >
> > No?
>
> Yes, I think you're right. The best solution that comes to mind offhand
> is to protect the update/read of that u64 with a seqcount_t on 32-bit.
Oh, but the seqcounts would need to be per-cpu, and both the read and the
write sides need memory barriers... Not that I'm arguing, this will fix the
problem of course, I just don't know how it impacts performance.
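To show what I mean, a rough sketch of per-cpu seqcount protection for the
u64 totals on 32-bit (<linux/seqlock.h>); the struct and function names are
illustrative, not the real ones:

	struct cputime_totals {
		u64		sum_exec_runtime;
		seqcount_t	seq;	/* one per cpu, next to the data it guards */
	};

	/* writer: the tick/scheduler path updating this cpu's totals */
	static void totals_add_exec_runtime(struct cputime_totals *t, u64 ns)
	{
		write_seqcount_begin(&t->seq);	/* does smp_wmb() */
		t->sum_exec_runtime += ns;
		write_seqcount_end(&t->seq);	/* does smp_wmb() */
	}

	/* reader: thread_group_cputime() summing over all cpus */
	static u64 totals_read_exec_runtime(struct cputime_totals *t)
	{
		unsigned seq;
		u64 val;

		do {
			seq = read_seqcount_begin(&t->seq);	/* does smp_rmb() */
			val = t->sum_exec_runtime;
		} while (read_seqcount_retry(&t->seq, seq));	/* retry on a torn read */

		return val;
	}

So the writer pays two barriers per tick and the reader pays two barriers
per cpu (plus possible retries), which is exactly the performance question.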
Oleg.