Message-ID: <1401184553.5134.115.camel@marge.simpson.net>
Date: Tue, 27 May 2014 11:55:53 +0200
From: Mike Galbraith <umgwanakikbuti@...il.com>
To: Libo Chen <libo.chen@...wei.com>
Cc: tglx@...utronix.de, mingo@...e.hu,
LKML <linux-kernel@...r.kernel.org>,
Greg KH <gregkh@...uxfoundation.org>,
Li Zefan <lizefan@...wei.com>, peterz@...radead.org
Subject: Re: balance storm
On Tue, 2014-05-27 at 15:56 +0800, Libo Chen wrote:
> On 2014/5/26 22:19, Mike Galbraith wrote:
> > On Mon, 2014-05-26 at 20:16 +0800, Libo Chen wrote:
> >> On 2014/5/26 13:11, Mike Galbraith wrote:
> >
> >>> Your synthetic test is the absolute worst case scenario. There has to
> >>> be work between wakeups for select_idle_sibling() to have any chance
> >>> whatsoever of turning in a win. At 0 work, it becomes 100% overhead.
> >>
> >> Not synthetic, it is a real problem in our product. Under no load, it
> >> wastes much CPU time.
> >
> > What happens in your product if you apply the commit I pointed out?
>
> Under no load, CPU usage is up to 60%, but the same apps cost 10% on
> susp sp1. The apps use a lot of timers.
Something is rotten. 3.14-rt contains that commit; I ran your test with
256 threads on a 64-core box and saw ~4%.
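
For reference, a minimal sketch of what I take the test to be (N threads
doing nothing but usleep(100) in a loop; the thread count matches the run
above, everything else is my assumption) looks like this:

	/* no-work reproducer sketch: wake, do nothing, sleep again */
	#include <pthread.h>
	#include <stdlib.h>
	#include <unistd.h>

	#define NTHREADS 256	/* assumed: matches the 256-thread run */

	static void *spinner(void *arg)
	{
		(void)arg;
		for (;;)
			usleep(100);	/* zero work between wakeups */
		return NULL;
	}

	int main(void)
	{
		pthread_t tid[NTHREADS];
		int i;

		for (i = 0; i < NTHREADS; i++)
			if (pthread_create(&tid[i], NULL, spinner, NULL))
				exit(1);
		pause();	/* let the spinners run until interrupted */
		return 0;
	}

With zero work between wakeups, every wakeup is pure wakeup/placement
overhead, which is the worst case for select_idle_sibling() mentioned
earlier in the thread.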
Putting a master/nopreempt config on the box and doing the same test, the
box is chewing up truckloads of CPU, but not from migrations.
perf top -g --sort=symbol

Samples: 7M of event 'cycles', Event count (approx.): 1316249172581
-  82.56%  [k] _raw_spin_lock_irqsave
   - _raw_spin_lock_irqsave
      - 96.59% __nanosleep_nocancel
           100.00% __libc_start_main
        2.88% __poll
+   1.56%  [k] native_write_msr_safe
+   1.21%  [k] update_cfs_shares
+   0.92%  [k] __schedule
+   0.88%  [k] _raw_spin_lock
+   0.73%  [k] update_cfs_rq_blocked_load
+   0.62%  [k] idle_cpu
+   0.47%  [.] usleep
+   0.41%  [k] cpuidle_enter_state
+   0.37%  [k] set_task_cpu
Oh, 256 * usleep(100) is not a great idea.
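
Purely as an illustration (not anything from the product workload), giving
each thread some work between wakeups, or a longer period, is the shape
that has any chance of amortizing the wakeup cost:

	/* illustrative variant: some work between wakeups, longer period */
	static void *saner_spinner(void *arg)
	{
		volatile unsigned long sum = 0;
		unsigned long i;

		(void)arg;
		for (;;) {
			for (i = 0; i < 100000; i++)	/* placeholder work */
				sum += i;
			usleep(10000);	/* 10ms instead of 100us */
		}
		return NULL;
	}

At 100us periods across 256 threads, the sleep/wakeup path itself becomes
the load, which appears to be what the profile above is showing.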
-Mike