Message-ID: <37D7C6CF3E00A74B8858931C1DB2F07753786CE9@SHSMSX103.ccr.corp.intel.com>
Date: Thu, 17 Aug 2017 16:17:40 +0000
From: "Liang, Kan" <kan.liang@...el.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>,
Tim Chen <tim.c.chen@...ux.intel.com>
CC: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...e.hu>,
"Andi Kleen" <ak@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
linux-mm <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH 1/2] sched/wait: Break up long wake list walk
> On Mon, Aug 14, 2017 at 5:52 PM, Tim Chen <tim.c.chen@...ux.intel.com>
> wrote:
> > We encountered workloads that have very long wake-up lists on large
> > systems. A waker takes a long time to traverse the entire wake list
> > and execute all the wake functions.
> >
> > We saw page wait lists that are up to 3700+ entries long in tests of
> > large 4 and 8 socket systems. It took 0.8 sec to traverse such a list
> > during wake up. Any other CPU that contends for the list spin lock
> > will spin for a long time. As a page wait list is shared by many
> > pages, it could get very long on systems with large memory.
>
> I really dislike this patch.
>
> The patch seems a band-aid for really horrible kernel behavior, rather than
> fixing the underlying problem itself.
>
> Now, it may well be that we do end up needing this band-aid in the end, so
> this isn't a NAK of the patch per se. But I'd *really* like to see if we can fix the
> underlying cause for what you see somehow..
>
> In particular, if this is about the page wait table, maybe we can just make the
> wait table bigger. IOW, are people actually waiting on the
> *same* page, or are they mainly waiting on totally different pages, just
> hashing to the same wait queue?
>
> Because right now that page wait table is a small fixed size, and the only
> reason it's a small fixed size is that nobody reported any issues with it -
> particularly since we now avoid the wait table entirely for the common cases
> by having that "contention" bit.
>
> But it really is a *small* table. We literally have
>
> #define PAGE_WAIT_TABLE_BITS 8
>
> so it's just 256 entries. We could easily make it much bigger, if we are
> actually seeing a lot of collisions.
>
> We *used* to have a very complex per-zone thing for bit-waitqueues, but
> that was because we got lots and lots of contention issues, and everybody
> *always* touched the wait-queues whether they waited or not (so being
> per-zone was a big deal).
>
> We got rid of all that per-zone complexity when the normal case didn't hit in
> the page wait queues at all, but we may have over-done the simplification a
> bit since nobody showed any issue.
>
> In particular, we used to size the per-zone thing by amount of memory.
> We could easily re-introduce that for the new simpler page queues.
>
> The page_waitqueue() is a simple helper function inside mm/filemap.c, and
> thanks to the per-page "do we have actual waiters" bit that we have now, we
> can actually afford to make it bigger and more complex now if we want to.
>
> What happens to your load if you just make that table bigger? You can
> literally test by just changing the constant from 8 to 16 or something, making
> us use twice as many bits for hashing. A "real"
> patch would size it by amount of memory, but just for testing the contention
> on your load, you can do the hacky one-liner.
Hi Linus,

We tried both a 12-bit and a 16-bit table, and neither made a difference.
Our instrumentation shows that the long wake-ups are mostly on the *same*
page.

Here is the wake_up_page_bit call stack while the workload is running,
collected by:

    perf record -g -a -e probe:wake_up_page_bit -- sleep 10
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 374 of event 'probe:wake_up_page_bit'
# Event count (approx.): 374
#
# Overhead Trace output
# ........ ..................
#
100.00% (ffffffffae1ad000)
|
---wake_up_page_bit
|
|--49.73%--migrate_misplaced_transhuge_page
| do_huge_pmd_numa_page
| __handle_mm_fault
| handle_mm_fault
| __do_page_fault
| do_page_fault
| page_fault
| |
| |--28.07%--0x2b7b7
| | |
| | |--13.64%--0x127a2
| | | 0x7fb5247eddc5
| | |
| | |--13.37%--0x127d8
| | | 0x7fb5247eddc5
| | |
| | |--0.53%--0x1280e
| | | 0x7fb5247eddc5
| | |
| | --0.27%--0x12844
| | 0x7fb5247eddc5
| |
| |--18.18%--0x2b788
| | |
| | |--14.97%--0x127a2
| | | 0x7fb5247eddc5
| | |
| | |--1.34%--0x1287a
| | | 0x7fb5247eddc5
| | |
| | |--0.53%--0x128b0
| | | 0x7fb5247eddc5
| | |
| | |--0.53%--0x1280e
| | | 0x7fb5247eddc5
| | |
| | |--0.53%--0x127d8
| | | 0x7fb5247eddc5
| | |
| | --0.27%--0x12844
| | 0x7fb5247eddc5
| |
| |--1.07%--0x2b823
| | |
| | |--0.53%--0x127a2
| | | 0x7fb5247eddc5
| | |
| | |--0.27%--0x1287a
| | | 0x7fb5247eddc5
| | |
| | --0.27%--0x127d8
| | 0x7fb5247eddc5
| |
| |--0.80%--0x2b88f
| | |
| | --0.53%--0x127d8
| | 0x7fb5247eddc5
| |
| |--0.80%--0x2b7f4
| | |
| | |--0.53%--0x127d8
| | | 0x7fb5247eddc5
| | |
| | --0.27%--0x127a2
| | 0x7fb5247eddc5
| |
| |--0.53%--0x2b8fb
| | 0x127a2
| | 0x7fb5247eddc5
| |
| --0.27%--0x2b8e9
| 0x127a2
| 0x7fb5247eddc5
|
|--44.12%--__handle_mm_fault
| handle_mm_fault
| __do_page_fault
| do_page_fault
| page_fault
| |
| |--30.75%--_dl_relocate_object
| | dl_main
| | _dl_sysdep_start
| | 0x40
| |
| --13.37%--memset
| _dl_map_object
| |
| |--2.94%--_etext
| |
| |--0.80%--0x7f34ea294b08
| | 0
| |
| |--0.80%--0x7f1d5fa64b08
| | 0
| |
| |--0.53%--0x7fd4c83dbb08
| | 0
| |
| |--0.53%--0x7efe3724cb08
| | 0
| |
| |--0.27%--0x7ff2cf0b69c0
| | 0
| |
| |--0.27%--0x7fc9bc22cb08
| | 0
| |
| |--0.27%--0x7fc432971058
| | 0
| |
| |--0.27%--0x7faf21ec2b08
| | 0
| |
| |--0.27%--0x7faf21ec2640
| | 0
| |
| |--0.27%--0x7f940f08e058
| | 0
| |
| |--0.27%--0x7f4b84122640
| | 0
| |
| |--0.27%--0x7f42c8fd7fd8
| | 0
| |
| |--0.27%--0x7f3f15778fd8
| | 0
| |
| |--0.27%--0x7f3f15776058
| | 0
| |
| |--0.27%--0x7f34ea27dfd8
| | 0
| |
| |--0.27%--0x7f34ea27b058
| | 0
| |
| |--0.27%--0x7f2a0409bb08
| | 0
| |
| |--0.27%--0x7f2a04084fd8
| | 0
| |
| |--0.27%--0x7f2a04082058
| | 0
| |
| |--0.27%--0x7f1949633b08
| | 0
| |
| |--0.27%--0x7f194961cfd8
| | 0
| |
| |--0.27%--0x7f1629f87b08
| | 0
| |
| |--0.27%--0x7f1629f70fd8
| | 0
| |
| |--0.27%--0x7f1629f6e058
| | 0
| |
| |--0.27%--0x7f060696eb08
| | 0
| |
| |--0.27%--0x7f04ac14c9c0
| | 0
| |
| |--0.27%--0x7efe8b4bbb08
| | 0
| |
| |--0.27%--0x7efe8b4a59c0
| | 0
| |
| |--0.27%--0x7efe8b4a4fd8
| | 0
| |
| |--0.27%--0x7efe8b4a2058
| | 0
| |
| |--0.27%--0x7efcd0c70b08
| | 0
| |
| |--0.27%--0x207ad8
| | 0
| |
| --0.27%--0x206b30
| 0
|
|--2.14%--filemap_map_pages
| __handle_mm_fault
| handle_mm_fault
| __do_page_fault
| do_page_fault
| page_fault
| |
| |--0.53%--_IO_vfscanf
| | |
| | |--0.27%--0x6563697665442055
| | |
| | --0.27%--_IO_vsscanf
| | 0x6563697665442055
| |
| |--0.53%--_dl_map_object_from_fd
| | _dl_map_object
| | |
| | |--0.27%--0x7faf21ec2640
| | | 0
| | |
| | --0.27%--_etext
| |
| |--0.27%--__libc_enable_asynccancel
| | __fopen_internal
| | 0x6d6f6f2f30373635
| |
| |--0.27%--vfprintf
| | _IO_vsprintf
| | 0x4
| |
| |--0.27%--0x1fb40
| | 0x41d589495541f689
| |
| --0.27%--memset@plt
|
|--1.87%--do_huge_pmd_numa_page
| __handle_mm_fault
| handle_mm_fault
| __do_page_fault
| do_page_fault
| page_fault
| |
| |--0.80%--0x2b7b7
| | 0x127d8
| | 0x7fb5247eddc5
| |
| |--0.80%--0x2b788
| | 0x127a2
| | 0x7fb5247eddc5
| |
| --0.27%--0x2b918
| 0x127d8
| 0x7fb5247eddc5
|
|--1.87%--migrate_pages
| migrate_misplaced_page
| __handle_mm_fault
| handle_mm_fault
| __do_page_fault
| do_page_fault
| page_fault
| |
| |--1.07%--0x2b7b7
| | |
| | |--0.80%--0x127d8
| | | 0x7fb5247eddc5
| | |
| | --0.27%--0x1287a
| | 0x7fb5247eddc5
| |
| --0.80%--0x2b788
| 0x127a2
| 0x7fb5247eddc5
|
--0.27%--do_wp_page
__handle_mm_fault
handle_mm_fault
__do_page_fault
do_page_fault
page_fault
_IO_link_in
Thanks,
Kan