Message-ID: <20100330110124.GA2566@dhcp-lab-161.englab.brq.redhat.com>
Date: Tue, 30 Mar 2010 13:01:24 +0200
From: Stanislaw Gruszka <sgruszka@...hat.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Americo Wang <xiyou.wangcong@...il.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>,
Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
Roland McGrath <roland@...hat.com>,
Spencer Candland <spencer@...ehost.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH -mm 2/4] cputimers: make sure thread_group_cputime()
can't count the same thread twice lockless
On Mon, Mar 29, 2010 at 08:13:29PM +0200, Oleg Nesterov wrote:
> - change __exit_signal() to do __unhash_process() before we accumulate
> the counters in ->signal
>
> - add a couple of barriers into thread_group_cputime() and __exit_signal()
> to make sure thread_group_cputime() can never account the same thread
> twice if it races with exit.
>
> If any thread T was already accounted in ->signal, next_thread() or
> pid_alive() must see the result of __unhash_process(T).
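If I read the patch right, the ordering is roughly like this (a
sketch with the cputime arithmetic simplified, not the patch itself;
exact barrier placement may differ):

	/*
	 * Writer side (__exit_signal()-like): unhash first, then
	 * fold the exiting thread's times into ->signal.
	 */
	__unhash_process(tsk);
	smp_wmb();		/* pairs with smp_rmb() in the reader */
	sig->utime += tsk->utime;
	sig->stime += tsk->stime;
	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;

	/*
	 * Reader side (thread_group_cputime()-like): snapshot ->signal,
	 * then walk the live threads.  If the snapshot already includes
	 * thread T, the rmb guarantees the walk sees T unhashed, so T
	 * is never accounted twice.
	 */
	times->utime = sig->utime;
	times->stime = sig->stime;
	times->sum_exec_runtime = sig->sum_sched_runtime;
	smp_rmb();		/* pairs with smp_wmb() above */
	t = tsk;
	do {
		times->utime += t->utime;
		times->stime += t->stime;
		times->sum_exec_runtime += t->se.sum_exec_runtime;
	} while_each_thread(tsk, t);
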
The memory barriers prevent the times from being accounted twice,
but since we write sig->{u,s}time and sig->sum_sched_runtime on one
cpu and read them on another cpu without a lock, this patch makes it
theoretically possible that some of the accumulated values in struct
task_cputime include an exited task's values while others do not.
For example, times->utime may include values from 10 threads while
times->{stime,sum_exec_runtime} include values from only 9 threads,
because the local cpu has updated sig->utime but not yet the two
other values. This can make the scaling in thread_group_times()
incorrect. I'm not sure how big a drawback that is.
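
Concretely, one interleaving that gives such a mix (a sketch,
assuming the writer updates utime first as above):

	CPU 0 (thread T exiting)		CPU 1 (thread_group_cputime())
	------------------------		------------------------------
	__unhash_process(T);
	smp_wmb();
	sig->utime += T->utime;
						times->utime = sig->utime;	/* includes T */
						times->stime = sig->stime;	/* misses T  */
						times->sum_exec_runtime = ...;	/* misses T  */
	sig->stime += T->stime;
	sig->sum_sched_runtime += ...;
						smp_rmb();
						walk threads: T unhashed, skipped

	-> times->utime covers all 10 threads, but times->{stime,
	   sum_exec_runtime} only 9, so thread_group_times() scales
	   with mismatched inputs.
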
Stanislaw