Message-ID: <1429107823.6795.18.camel@stgolabs.net>
Date: Wed, 15 Apr 2015 07:23:43 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: Jason Low <jason.low2@...com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Mel Gorman <mgorman@...e.de>,
Steven Rostedt <rostedt@...dmis.org>,
Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
hideaki.kimura@...com, Aswin Chandramouleeswaran <aswin@...com>,
Scott J Norton <scott.norton@...com>
Subject: Re: [PATCH 2/3] sched, timer: Use atomics for thread_group_cputimer
to improve scalability
On Tue, 2015-04-14 at 16:09 -0700, Jason Low wrote:
> While running a database workload, we found a scalability issue with itimers.
>
> Much of the problem was caused by the thread_group_cputimer spinlock.
> Each time we account for group system/user time, we need to take the
> thread_group_cputimer's spinlock to update the timers. On larger systems
> (such as a 16-socket machine), more than 30% of total time was spent
> trying to obtain this kernel lock to update these group timer stats.
>
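If I'm reading the current code right, the hot path is roughly the
below (just a sketch; field/function names from memory and approximate):

static void account_group_user_time(struct task_struct *tsk,
				    cputime_t cputime)
{
	struct thread_group_cputimer *cputimer = &tsk->signal->cputimer;

	if (!cputimer->running)
		return;

	/* every thread in the group serializes here, each tick */
	raw_spin_lock(&cputimer->lock);
	cputimer->cputime.utime += cputime;
	raw_spin_unlock(&cputimer->lock);
}
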
> This patch converts the timers to 64-bit atomic variables and uses
> atomic adds to update them without a lock. With this patch, the
> percentage of total time spent updating thread group cputimer timers
> was reduced from 30% down to less than 1%.
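
So the accounting path presumably ends up as something like this
(again only a sketch, assuming the timers become atomic64_t; the
actual struct layout in your patch may differ):

struct thread_group_cputimer {
	atomic64_t	utime;
	atomic64_t	stime;
	atomic64_t	sum_exec_runtime;
	int		running;
};

static void account_group_user_time(struct task_struct *tsk,
				    cputime_t cputime)
{
	struct thread_group_cputimer *cputimer = &tsk->signal->cputimer;

	if (!READ_ONCE(cputimer->running))
		return;

	/* no lock needed; the update is a single atomic RMW */
	atomic64_add(cputime, &cputimer->utime);
}
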
What does spending 30% less time on the thread_group_cputimer's
spinlock buy us? iow, does this improve DB benchmark throughput or
some such?
Thanks,
Davidlohr