Message-ID: <53853604.50500@huawei.com>
Date: Wed, 28 May 2014 09:04:04 +0800
From: Libo Chen <libo.chen@...wei.com>
To: Mike Galbraith <umgwanakikbuti@...il.com>
CC: <tglx@...utronix.de>, <mingo@...e.hu>,
LKML <linux-kernel@...r.kernel.org>,
Greg KH <gregkh@...uxfoundation.org>,
"Li Zefan" <lizefan@...wei.com>, <peterz@...radead.org>,
Huang Qiang <h.huangqiang@...wei.com>
Subject: Re: balance storm
On 2014/5/27 21:20, Mike Galbraith wrote:
> On Tue, 2014-05-27 at 20:50 +0800, Libo Chen wrote:
>
>> in my box:
>>
>> perf top -g --sort=symbol
>>
>> Events: 3K cycles
>> 73.27% [k] read_hpet
>> 4.30% [k] _raw_spin_lock_irqsave
>> 1.88% [k] __schedule
>> 1.00% [k] idle_cpu
>> 0.91% [k] native_write_msr_safe
>> 0.68% [k] select_task_rq_fair
>> 0.51% [k] module_get_kallsym
>> 0.49% [.] sem_post
>> 0.44% [.] main
>> 0.41% [k] menu_select
>> 0.39% [k] _raw_spin_lock
>> 0.38% [k] __switch_to
>> 0.33% [k] _raw_spin_lock_irq
>> 0.32% [k] format_decode
>> 0.29% [.] usleep
>> 0.28% [.] symbols__insert
>> 0.27% [k] tick_nohz_stop_sched_tick
>> 0.27% [k] update_stats_wait_end
>> 0.26% [k] apic_timer_interrupt
>> 0.25% [k] enqueue_entity
>> 0.25% [k] sched_clock_local
>> 0.24% [k] _raw_spin_unlock_irqrestore
>> 0.24% [k] select_idle_sibling
>
> read_hpet? Are you booting the box with notsc or something? Migration cost
> is the least of your worries.
Oh yes, there is no TSC, only HPET in my box. I don't know why read_hpet is so
hot, but when I bind the three tasks to separate CPUs the cost drops sharply,
yet perf still shows read_hpet as hot (a sketch of the pinning follows the
profile below).
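For reference, that the box really is on HPET can be confirmed from sysfs; a
minimal check (a sketch, assuming the standard Linux clocksource interface):

#include <stdio.h>

int main(void)
{
	/* Standard sysfs file exposing the clocksource in use. */
	FILE *f = fopen("/sys/devices/system/clocksource/"
			"clocksource0/current_clocksource", "r");
	char buf[64];

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("active clocksource: %s", buf);	/* e.g. "hpet" */
	fclose(f);
	return 0;
}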
After binding:
Events: 561K cycles
64.18% [kernel] [k] read_hpet
5.51% usleep [.] main
2.71% [kernel] [k] __schedule
1.82% [kernel] [k] _raw_spin_lock_irqsave
1.56% libc-2.11.3.so [.] usleep
1.07% [kernel] [k] apic_timer_interrupt
0.89% libc-2.11.3.so [.] __GI___libc_nanosleep
0.82% [kernel] [k] native_write_msr_safe
0.82% [kernel] [k] ktime_get
0.71% [kernel] [k] trace_hardirqs_off
0.63% [kernel] [k] __switch_to
0.60% [kernel] [k] _raw_spin_unlock_irqrestore
0.47% [kernel] [k] menu_select
0.46% [kernel] [k] _raw_spin_lock
0.45% [kernel] [k] enqueue_entity
0.45% [kernel] [k] sched_clock_local
0.43% [kernel] [k] try_to_wake_up
0.42% [kernel] [k] hrtimer_nanosleep
0.36% [kernel] [k] do_nanosleep
0.35% [kernel] [k] _raw_spin_lock_irq
0.34% [kernel] [k] rb_insert_color
0.29% [kernel] [k] update_curr
0.29% [kernel] [k] native_sched_clock
0.28% [kernel] [k] hrtimer_interrupt
0.28% [kernel] [k] rcu_idle_exit_common
0.27% [kernel] [k] hrtimer_init
0.27% [kernel] [k] __hrtimer_start_range_ns
0.26% [kernel] [k] __rb_erase_color
0.26% [kernel] [k] lock_hrtimer_base
0.25% [kernel] [k] trace_hardirqs_on
0.23% [kernel] [k] rcu_idle_enter_common
0.23% [kernel] [k] cpuidle_idle_call
0.23% [kernel] [k] finish_task_switch
0.22% [kernel] [k] set_next_entity
0.22% [kernel] [k] cpuacct_charge
0.22% [kernel] [k] pick_next_task_fair
0.21% [kernel] [k] sys_nanosleep
0.20% [kernel] [k] rb_next
0.20% [kernel] [k] start_critical_timings
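The binding above amounts to pinning each task to its own CPU. A minimal
sketch of the idea, assuming sched_setaffinity(2); the target CPU below is a
placeholder, and the same effect can be had from the shell with taskset(1):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;
	int cpu = 1;	/* hypothetical target CPU */

	/* Restrict the calling task (pid 0 = self) to one CPU. */
	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}
	/* ... run the usleep workload here, confined to that CPU ... */
	return 0;
}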
>
> -Mike
>
>
>