Date:	Fri, 15 Aug 2014 07:19:31 +0200
From:	Mike Galbraith <umgwanakikbuti@...il.com>
To:	Oleg Nesterov <oleg@...hat.com>
Cc:	Rik van Riel <riel@...hat.com>, linux-kernel@...r.kernel.org,
	Peter Zijlstra <peterz@...radead.org>,
	Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>,
	Frank Mayhar <fmayhar@...gle.com>,
	Frederic Weisbecker <fweisbec@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Sanjay Rao <srao@...hat.com>,
	Larry Woodman <lwoodman@...hat.com>
Subject: Re: [PATCH RFC] time,signal: protect resource use statistics with
 seqlock

On Thu, 2014-08-14 at 19:48 +0200, Oleg Nesterov wrote: 
> On 08/14, Oleg Nesterov wrote:
> >
> > OK, let's forget about the alternative approach for now. We can reconsider
> > it later. At least I have to admit that seqlock is more straightforward.
> 
> Yes.
> 
> But just for the record, the "lockless" version doesn't look that bad to me:
> 
> 	void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
> 	{
> 		struct signal_struct *sig = tsk->signal;
> 		bool lockless, is_dead;
> 		struct task_struct *t;
> 		unsigned long flags;
> 		u64 exec;
> 
> 		lockless = true;
> 		is_dead = !lock_task_sighand(tsk, &flags);
> 	 retry:
> 		times->utime = sig->utime;
> 		times->stime = sig->stime;
> 		times->sum_exec_runtime = exec = sig->sum_sched_runtime;
> 		if (is_dead)
> 			return;
> 
> 		if (lockless)
> 			unlock_task_sighand(tsk, &flags);
> 
> 		rcu_read_lock();
> 		for_each_thread(tsk, t) {
> 			cputime_t utime, stime;
> 			task_cputime(t, &utime, &stime);
> 			times->utime += utime;
> 			times->stime += stime;
> 			times->sum_exec_runtime += task_sched_runtime(t);
> 		}
> 		rcu_read_unlock();
> 
> 		if (lockless) {
> 			lockless = false;
> 			is_dead = !lock_task_sighand(tsk, &flags);
> 			if (is_dead || exec != sig->sum_sched_runtime)
> 				goto retry;
> 		}
> 		unlock_task_sighand(tsk, &flags);
> 	}
> 
> The obvious problem is that we should shift lock_task_sighand() from the
> callers to thread_group_cputime() first, or add thread_group_cputime_lockless()
> and change the current users one by one.
> 
> And of course, stats_lock is more generic.

Yours looks nice to me, particularly in that it doesn't munge structure
layout, and could perhaps be backported to fix up production kernels.

For the case of N threads doing this on N cores, it seems rq->lock
hammering will still be a source of major box-wide pain.  Is there any
correctness reason to add up unaccounted ->on_cpu beans, or is that just
value added?  Seems to me it can't matter: as you traverse, what you
added up on previous threads becomes ever more stale as you proceed, so
big boxen would be better off not doing that.
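
For reference, the seqcount pattern the RFC is about can be sketched in
plain userspace C11 (names here are invented for illustration; the kernel
side would instead put a seqlock in signal_struct and use write_seqlock()
on the writer and read_seqbegin()/read_seqretry() on the reader):

```c
#include <stdatomic.h>

/* Toy stand-in for the stats fields in signal_struct (names invented). */
struct stats {
	atomic_uint seq;		/* sequence count: odd while a write is in flight */
	unsigned long utime, stime;
};

/* Writer side: bump to odd, update, bump back to even (single writer assumed). */
static void stats_update(struct stats *s, unsigned long du, unsigned long ds)
{
	atomic_fetch_add_explicit(&s->seq, 1, memory_order_acq_rel);
	s->utime += du;
	s->stime += ds;
	atomic_fetch_add_explicit(&s->seq, 1, memory_order_release);
}

/* Reader side: lockless snapshot, retried if a writer ran concurrently. */
static void stats_read(struct stats *s, unsigned long *u, unsigned long *st)
{
	unsigned int begin;

	do {
		/* wait out any in-flight write (odd count) */
		do {
			begin = atomic_load_explicit(&s->seq, memory_order_acquire);
		} while (begin & 1);

		*u = s->utime;
		*st = s->stime;
	} while (atomic_load_explicit(&s->seq, memory_order_acquire) != begin);
}
```

Readers never block the writer and pay nothing beyond a possible retry,
which is what makes it attractive for the thread_group_cputime() path.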

-Mike

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
