Message-ID: <37D7C6CF3E00A74B8858931C1DB2F07753787CCE@SHSMSX103.ccr.corp.intel.com>
Date: Fri, 18 Aug 2017 20:29:25 +0000
From: "Liang, Kan" <kan.liang@...el.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
CC: Mel Gorman <mgorman@...hsingularity.net>,
Mel Gorman <mgorman@...e.de>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
"Ingo Molnar" <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
linux-mm <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH 1/2] sched/wait: Break up long wake list walk
> >>
> >> That indicates that it may be a hot page and it's possible that the
> >> page is locked for a short time but waiters accumulate. What happens
> >> if you leave NUMA balancing enabled but disable THP?
> >
> > No, disabling THP doesn't help the case.
>
> Interesting. That particular code sequence should only be active for THP.
> What does the profile look like with THP disabled but with NUMA balancing
> still enabled?
Here are the profiles with THP disabled (NUMA balancing still enabled) for
wait_on_page_bit_common and wake_up_page_bit. A toy model of the wait/wake
path follows the first call stack below.
The call stack of wait_on_page_bit_common:
# Overhead Trace output
# ........ ..................
#
100.00% (ffffffff821aefca)
|
---wait_on_page_bit
__migration_entry_wait
migration_entry_wait
do_swap_page
__handle_mm_fault
handle_mm_fault
__do_page_fault
do_page_fault
page_fault
|
|--24.28%--_int_free
| |
| --24.15%--0
|
|--15.48%--0x2b788
| |
| --15.47%--0x127a2
| start_thread
|
|--13.54%--0x2b7b7
| |
| |--8.68%--0x127a2
| | start_thread
| |
| --4.86%--0x127d8
| start_thread
|
|--11.69%--0x123a2
| start_thread
|
|--6.30%--0x12205
| 0x1206d
| 0x11f85
| 0x11a05
| 0x10302
| |
| --6.27%--0xa8ee
| |
| --5.48%--0x3af5
| |
| --5.43%--__libc_start_main
|
|--5.24%--0x12352
| start_thread
|
|--3.56%--0x127bc
| |
| --3.55%--start_thread
|
|--3.06%--0x127a9
| start_thread
|
|--3.05%--0x127f2
| |
| --3.05%--start_thread
|
|--2.62%--0x127df
| start_thread
|
|--2.35%--0x1285e
| start_thread
|
|--1.86%--0x1284b
| start_thread
|
|--1.23%--0x12894
| start_thread
|
|--1.23%--0x12828
| start_thread
|
|--1.12%--0x1233c
| start_thread
|
|--1.02%--0x12881
| start_thread
|
|--0.99%--0x12773
| start_thread
|
--0.97%--0x12815
start_thread
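
FWIW, the way I read the current mm/filemap.c code, every waiter for a page
hashes to a shared wait_queue_head, and wake_up_page_bit() walks the whole
waiter list with the queue spinlock held. Below is a small user-space model
of that shape. It is an illustration only, not kernel code; the names and
the 224-waiter count are made up for the example.

/*
 * Toy model (not kernel code) of the per-page wait queue: all waiters
 * for a hot page end up on one list, and the wake side walks the whole
 * list in a single locked pass.
 */
#include <stdio.h>

struct waiter {
	int		tid;	/* who is sleeping */
	struct waiter	*next;	/* singly linked for brevity */
};

struct wait_queue_head {
	struct waiter	*head;	/* the kernel also has a spinlock here */
};

/* A faulting thread queues itself on the page's (hashed) wait queue. */
static void wait_on_page(struct wait_queue_head *q, struct waiter *w)
{
	w->next = q->head;
	q->head = w;
}

/* Wake everyone: one unbroken walk, "lock" held for the whole walk. */
static int wake_up_page(struct wait_queue_head *q)
{
	int woken = 0;

	/* spin_lock(&q->lock) in the real code */
	for (struct waiter *w = q->head; w; w = w->next)
		woken++;	/* the real code calls the wake function */
	q->head = NULL;
	/* spin_unlock(&q->lock) */

	return woken;
}

int main(void)
{
	struct wait_queue_head q = { 0 };
	struct waiter w[224];	/* arbitrary: many threads, one hot page */
	int i;

	for (i = 0; i < 224; i++) {
		w[i].tid = i;
		wait_on_page(&q, &w[i]);
	}
	printf("woke %d waiters in one locked walk\n", wake_up_page(&q));
	return 0;
}

With many threads repeatedly faulting on the same migrating page, the lock
hold time for that walk grows with the waiter count.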
The wake_up_page_bit profile below is again from a 10-second sample.
# Samples: 5K of event 'probe:wake_up_page_bit'
# Event count (approx.): 5645
#
# Overhead Trace output
# ........ ..................
#
100.00% (ffffffff821ad000)
|
---wake_up_page_bit
|
|--50.89%--do_wp_page
| __handle_mm_fault
| handle_mm_fault
| __do_page_fault
| do_page_fault
| page_fault
| |
| |--38.97%--_dl_fixup
| | |
| | |--16.88%--0x7f933d9f2e40
| | | 0
| | |
| | |--13.73%--0x7fb87a828e40
| | | 0
| | |
| | |--4.84%--0x7fed49202e40
| | | 0
| | |
| | |--0.87%--0x7fed491ffa50
| | | 0
| | |
| | |--0.73%--0x7f933d9efa50
| | | 0
| | |
| | --0.71%--0x7fed492024b0
| | 0
| |
| |--3.14%--_dl_fini
| | __run_exit_handlers
| | |
| | |--1.81%--0x7fb87994f2a0
| | | 0
| | |
| | |--0.71%--0x7fed483292a0
| | | 0
| | |
| | --0.62%--0x7f933cb192a0
| | 0
| |
| |--1.91%--0x6ad0
| | __run_exit_handlers
| | |
| | |--1.03%--0x7fb87994f2a0
| | | 0
| | |
| | --0.87%--0x7f933cb192a0
| | 0
| |
| |--1.52%--ped_disk_type_unregister
| | __run_exit_handlers
| | |
| | --1.06%--0x7fed483292a0
| | 0
| |
| |--1.06%--0xcd89
| | __run_exit_handlers
| | |
| | --0.51%--0x7fb87994f2a0
| | 0
| |
| |--1.05%--__offtime
| | 0
| |
| |--0.83%--0x45f9
| | __run_exit_handlers
| | |
| | --0.73%--0x7fed483292a0
| | 0
| |
| |--0.66%--0x10de8
| | __run_exit_handlers
| |
| --0.57%--0x3455
| __run_exit_handlers
|
|--45.85%--migrate_pages
| migrate_misplaced_page
| __handle_mm_fault
| handle_mm_fault
| __do_page_fault
| do_page_fault
| page_fault
| |
| |--12.21%--0x42f2
| | 0x11f77
| | 0x11a05
| | 0x10302
| | 0xa8ee
| | |
| | --9.44%--0x3af5
| | __libc_start_main
| |
| |--3.79%--_int_free
| | 0
| |
| |--2.69%--_dl_fini
| | __run_exit_handlers
| | |
| | |--1.17%--0x7f933cb192a0
| | | 0
| | |
| | |--0.85%--0x7fb87994f2a0
| | | 0
| | |
| | --0.67%--0x7fed483292a0
| | 0
| |
| |--2.57%--0x12205
| | 0x1206d
| | 0x11f85
| | 0x11a05
| | 0x10302
| | 0xa8ee
| | |
| | --1.98%--0x3af5
| | __libc_start_main
| |
| |--1.20%--_dl_fixup
| | |
| | |--0.71%--0x3af5
| | | __libc_start_main
| | |
| | --0.50%--_dl_fini
| | __run_exit_handlers
| |
| |--1.15%--do_lookup_x
| |
| |--0.99%--0xcc26
| | __run_exit_handlers
| |
| |--0.90%--ped_device_free_all
| | __run_exit_handlers
| |
| |--0.89%--__do_global_dtors_aux
| | __run_exit_handlers
| |
| |--0.89%--0x3448
| | __run_exit_handlers
| | |
| | --0.53%--0x7fb87994f2a0
| | 0
| |
| |--0.83%--0x25bc4
| | __run_exit_handlers
| |
| |--0.83%--check_match.9440
| | 0xae470
| |
| |--0.80%--0x30f0
| | __run_exit_handlers
| |
| |--0.73%--0x17a0
| | __run_exit_handlers
| |
| |--0.71%--0xcd60
| | __run_exit_handlers
| |
| |--0.71%--0x4754
| | __run_exit_handlers
| |
| |--0.69%--dm_get_suspended_counter@plt
| | __run_exit_handlers
| |
| |--0.60%--free@plt
| | 0
| |
| |--0.60%--0x1580
| | __run_exit_handlers
| |
| |--0.55%--__tz_compute
| | 0
| |
| |--0.55%--0x6020
| | __run_exit_handlers
| |
| |--0.55%--__do_global_dtors_aux
| | __run_exit_handlers
| |
| |--0.53%--0x25ae4
| | __run_exit_handlers
| |
| |--0.53%--0x11a16
| | 0x10302
| | 0xa8ee
| |
| |--0.53%--dm_get_suspended_counter
| | __run_exit_handlers
| |
| |--0.53%--ped_device_free_all@plt
| | __run_exit_handlers
| |
| |--0.53%--__do_global_dtors_aux
| | __run_exit_handlers
| |
| |--0.53%--0x1620
| | __run_exit_handlers
| |
| |--0.50%--__cxa_finalize@plt
| | _dl_fini
| | __run_exit_handlers
| |
| --0.50%--0x1910
| __run_exit_handlers
|
|--1.72%--filemap_map_pages
| __handle_mm_fault
| handle_mm_fault
| __do_page_fault
| do_page_fault
| page_fault
|
--1.54%--__handle_mm_fault
handle_mm_fault
__do_page_fault
do_page_fault
page_fault
|
|--0.69%--memset
| _dl_map_object
|
--0.64%--_dl_relocate_object
dl_main
_dl_sysdep_start
0x40
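
For completeness, here is my reading of what patch 1/2 does, as a toy
sketch. This is an illustration only, not the patch code: the chunk size
is a guess, and the bare resume pointer is a simplification of the
bookmark wait queue entry the patch uses to hold its place safely.

/*
 * Toy sketch of the chunked wake walk: wake at most WALK_BREAK_CNT
 * entries per lock hold, remember where we stopped, release the lock,
 * then reacquire and continue from that point.
 */
#include <stdio.h>

#define WALK_BREAK_CNT	64	/* per-chunk limit; the exact value in
				 * the patch is a guess on my part */

struct waiter {
	struct waiter	*next;
};

struct wait_queue_head {
	struct waiter	*head;
};

static int wake_up_chunked(struct wait_queue_head *q)
{
	struct waiter *resume = q->head;
	int woken = 0;
	int n;

	while (resume) {
		/* spin_lock(&q->lock) */
		for (n = 0; resume && n < WALK_BREAK_CNT; n++) {
			woken++;	/* real code calls the wake func */
			resume = resume->next;
		}
		/* spin_unlock(&q->lock): waiters and other wakers can
		 * grab the lock between chunks instead of spinning for
		 * the whole walk */
	}
	q->head = NULL;

	return woken;
}

int main(void)
{
	struct wait_queue_head q = { 0 };
	static struct waiter w[500];	/* arbitrary long wake list */
	int i;

	for (i = 0; i < 500; i++) {
		w[i].next = q.head;
		q.head = &w[i];
	}
	printf("woke %d waiters in chunks of %d\n",
	       wake_up_chunked(&q), WALK_BREAK_CNT);
	return 0;
}

Capping the per-hold walk length bounds the worst-case lock hold time,
which is the part that hurts when hundreds of CPUs pile onto one page.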
Thanks,
Kan