Date:	Fri, 23 Jan 2015 15:39:21 -0800
From:	Jason Low <jason.low2@...com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Ingo Molnar <mingo@...nel.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Mike Galbraith <umgwanakikbuti@...il.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Scott J Norton <scott.norton@...com>,
	Chegu Vinod <chegu_vinod@...com>,
	Aswin Chandramouleeswaran <aswin@...com>,
	linux-kernel@...r.kernel.org, jason.low2@...com
Subject: Re: [RFC PATCH] sched, timer: Use atomics for thread_group_cputimer stats

On Fri, 2015-01-23 at 21:08 +0100, Peter Zijlstra wrote:
> On Fri, Jan 23, 2015 at 11:23:36AM -0800, Jason Low wrote:
> > On Fri, 2015-01-23 at 10:25 +0100, Peter Zijlstra wrote:
> > > On Thu, Jan 22, 2015 at 07:31:53PM -0800, Jason Low wrote:
> > > > +static void update_gt_cputime(struct thread_group_cputimer *a, struct task_cputime *b)
> > > >  {
> > > > +	if (b->utime > atomic64_read(&a->utime))
> > > > +		atomic64_set(&a->utime, b->utime);
> > > >  
> > > > +	if (b->stime > atomic64_read(&a->stime))
> > > > +		atomic64_set(&a->stime, b->stime);
> > > >  
> > > > +	if (b->sum_exec_runtime > atomic64_read(&a->sum_exec_runtime))
> > > > +		atomic64_set(&a->sum_exec_runtime, b->sum_exec_runtime);
> > > >  }
> > > 
> > > See, something like this is not safe against concurrent adds.
> > 
> > How about something like:
> > 
> > u64 a_utime, a_stime, a_sum_exec_runtime;
> > 
> > retry_utime:
> > 	a_utime = atomic64_read(&a->utime);
> > 	if (b->utime > a_utime) {
> > 		if (atomic64_cmpxchg(&a->utime, a_utime, b->utime) != a_utime)
> > 			goto retry_utime;
> > 	}
> > 
> > retry_stime:
> > 	a_stime = atomic64_read(&a->stime);
> > 	if (b->stime > a_stime) {
> > 		if (atomic64_cmpxchg(&a->stime, a_stime, b->stime) != a_stime)
> > 			goto retry_stime;
> > 	}
> > 
> > retry_sum_exec_runtime:
> > 	a_sum_exec_runtime = atomic64_read(&a->sum_exec_runtime);
> > 	if (b->sum_exec_runtime > a_sum_exec_runtime) {
> > 		if (atomic64_cmpxchg(&a->sum_exec_runtime, a_sum_exec_runtime,
> > 				     b->sum_exec_runtime) != a_sum_exec_runtime)
> > 			goto retry_sum_exec_runtime;
> > 	}
> 
> Disgusting, at least use an inline or macro to avoid repeating it :-)
> 
> Also, does anyone care about performance on 32-bit systems? There are a few
> where atomic64 is abysmal.
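
(Peter's suggestion to factor the repeated cmpxchg retry loop into a helper
could look roughly like the sketch below; the __update_gt_cputime name and
exact loop shape are illustrative, assuming the atomic64_t fields from the
patch above, not the final code.)

static inline void __update_gt_cputime(atomic64_t *cputime, u64 sum_cputime)
{
	u64 curr;

	/* Lock-free "store the max": retry the cmpxchg if we race with another update. */
	curr = atomic64_read(cputime);
	while (sum_cputime > curr) {
		u64 prev = atomic64_cmpxchg(cputime, curr, sum_cputime);

		if (prev == curr)
			break;		/* our value went in */
		curr = prev;		/* lost the race; re-check against the new value */
	}
}

static void update_gt_cputime(struct thread_group_cputimer *a, struct task_cputime *b)
{
	__update_gt_cputime(&a->utime, b->utime);
	__update_gt_cputime(&a->stime, b->stime);
	__update_gt_cputime(&a->sum_exec_runtime, b->sum_exec_runtime);
}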

Yeah, though we're also avoiding the spin lock/unlock calls each time, so
I'm not sure we're really adding anything significant to the "overall
cost" on 32-bit systems. And update_gt_cputime doesn't get called very
frequently anyway.
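
(For reference on the 32-bit cost: architectures without native 64-bit
atomics fall back to the generic lib/atomic64.c implementation, which takes
a spinlock around every operation. The sketch below is a simplified
illustration of that fallback, not the exact kernel code, which uses a
small hashed array of raw spinlocks.)

static raw_spinlock_t generic_atomic64_lock;	/* illustrative: the real code hashes to one of several locks */

static long long generic_atomic64_read(const atomic64_t *v)
{
	unsigned long flags;
	long long val;

	/* Even a plain read pays for lock/unlock plus irq save/restore. */
	raw_spin_lock_irqsave(&generic_atomic64_lock, flags);
	val = v->counter;
	raw_spin_unlock_irqrestore(&generic_atomic64_lock, flags);
	return val;
}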

