Message-ID: <CA+55aFxuO1r1riZ=5dO9NtvWOhGQdKHfhfCTuahoOTjN_yd6UA@mail.gmail.com>
Date: Fri, 18 Aug 2017 13:34:37 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: "Liang, Kan" <kan.liang@...el.com>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
Mel Gorman <mgorman@...e.de>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
linux-mm <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] sched/wait: Break up long wake list walk
On Fri, Aug 18, 2017 at 1:29 PM, Liang, Kan <kan.liang@...el.com> wrote:
> Here is the profiling with THP disabled for wait_on_page_bit_common and
> wake_up_page_bit.
>
>
> The call stack of wait_on_page_bit_common
> # Overhead Trace output
> # ........ ..................
> #
> 100.00% (ffffffff821aefca)
> |
> ---wait_on_page_bit
> __migration_entry_wait
> migration_entry_wait
> do_swap_page
Ok, so it really is exactly the same thing, just for a regular page,
and there is absolutely nothing huge-page specific to this.
Thanks.
If you can test that (hacky, ugly) yield() patch, just to see how it
behaves (maybe it degrades performance horribly even if it then avoids
the long wait queues), that would be lovely.
Does the load actually have some way of measuring performance? Because
with the yield(), I'd hope that the wait_on_page_bit() overhead is all
gone, but it might just *perform* horribly badly.
Linus