Message-ID: <20160130203602.GA7856@lerouge>
Date: Sat, 30 Jan 2016 21:36:05 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: Mike Galbraith <umgwanakikbuti@...il.com>
Cc: Rik van Riel <riel@...hat.com>, linux-kernel@...r.kernel.org,
tglx@...utronix.de, mingo@...nel.org, peterz@...radead.org,
luto@...capital.net, Clark Williams <williams@...hat.com>
Subject: Re: [PATCH 2/2] sched,time: call __acct_update_integrals once a jiffy
On Sat, Jan 30, 2016 at 06:53:05PM +0100, Mike Galbraith wrote:
> On Sat, 2016-01-30 at 15:20 +0100, Frederic Weisbecker wrote:
> > On Fri, Jan 29, 2016 at 05:43:28PM -0500, Rik van Riel wrote:
>
> > > Run times for the microbenchmark:
> > >
> > > 4.4                       3.8 seconds
> > > 4.5-rc1                   3.7 seconds
> > > 4.5-rc1 + first patch     3.3 seconds
> > > 4.5-rc1 + both patches    2.3 seconds
> >
> > Very nice improvement!
>
> Tasty indeed.
>
> When nohz_full CPUs are not isolated, i.e. are being used as generic
> CPUs, get_nohz_timer_target() is a problem for things like tbench.
So by isolated CPUs you mean the ones in the isolcpus= boot option, right?
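
For reference, here is roughly what get_nohz_timer_target() does here (a
simplified sketch of the kernel/sched/core.c logic in this era, written from
memory, so details may differ):

int get_nohz_timer_target(void)
{
        int i, cpu = smp_processor_id();
        struct sched_domain *sd;

        /* Stay put if this CPU is busy and allowed to take timers. */
        if (!idle_cpu(cpu) && is_housekeeping_cpu(cpu))
                return cpu;

        /* Otherwise prefer a busy housekeeping CPU nearby... */
        rcu_read_lock();
        for_each_domain(cpu, sd) {
                for_each_cpu(i, sched_domain_span(sd)) {
                        if (!idle_cpu(i) && is_housekeeping_cpu(i)) {
                                cpu = i;
                                goto unlock;
                        }
                }
        }

        /* ...and fall back to any housekeeping CPU. */
        if (!is_housekeeping_cpu(cpu))
                cpu = housekeeping_any_cpu();
unlock:
        rcu_read_unlock();
        return cpu;
}

So with nohz_full=1-3,5-7, only the CPUs outside the nohz_full range (0 and 4
here) pass is_housekeeping_cpu(), and unpinned timers from the whole box get
funneled to them even when the nohz_full CPUs are running generic work.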
>
> tbench 8 with Rik's patches applied:
> nohz_full=empty
> Throughput 3204.69 MB/sec 1.000
> nohz_full=1-3,5-7
> Throughput 1354.99 MB/sec .422 1.000
> nohz_full=1-3,5-7 + club below
> Throughput 2762.22 MB/sec .861 2.038
>
> With Rik's patches and a club, tbench becomes nearly acceptable.
> ---
> include/linux/tick.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/include/linux/tick.h
> +++ b/include/linux/tick.h
> @@ -184,7 +184,7 @@ static inline const struct cpumask *hous
>  static inline bool is_housekeeping_cpu(int cpu)
>  {
>  #ifdef CONFIG_NO_HZ_FULL
> -        if (tick_nohz_full_enabled())
> +        if (tick_nohz_full_enabled() && runqueue_is_isolated(cpu))
>                  return cpumask_test_cpu(cpu, housekeeping_mask);
This confuses me. How does forcing timers to the CPUs in isolcpus= improve
the results?
>  #endif
>          return true;
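
(Side note: I'm only guessing at what runqueue_is_isolated() does, I haven't
seen the patch introducing it. I assume it's essentially a test against
cpu_isolated_map, something like this hypothetical sketch:

/* Hypothetical sketch: assuming the helper just tests the isolcpus= mask. */
static inline bool runqueue_is_isolated(int cpu)
{
        return cpumask_test_cpu(cpu, cpu_isolated_map);
}

If so, the housekeeping_mask restriction above now only applies to CPUs that
are also in isolcpus=.)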