Message-ID: <CA+55aFxZjjqUM4kPvNEeZahPovBHFATiwADj-iPTDN0-jnU67Q@mail.gmail.com>
Date: Fri, 18 Aug 2017 10:48:23 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: "Liang, Kan" <kan.liang@...el.com>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
Mel Gorman <mgorman@...e.de>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
linux-mm <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] sched/wait: Break up long wake list walk
On Fri, Aug 18, 2017 at 9:53 AM, Liang, Kan <kan.liang@...el.com> wrote:
>
>> On Fri, Aug 18, 2017 Mel Gorman wrote:
>>
>> That indicates that it may be a hot page and it's possible that the page is
>> locked for a short time but waiters accumulate. What happens if you leave
>> NUMA balancing enabled but disable THP?
>
> No, disabling THP doesn't help the case.
Interesting. That particular code sequence should only be active for
THP. What does the profile look like with THP disabled but with NUMA
balancing still enabled?
Just asking because maybe that different call chain could give us some
other ideas about what the commonality here is that triggers our
behavioral problem.
I was really hoping that we'd root-cause this and have a solution (and
then apply Tim's patch as a "belt and suspenders" kind of thing), but
it's starting to smell like we may have to apply Tim's patch as a
band-aid, and try to figure out what the trigger is longer-term.
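For reference, a rough userspace sketch of the idea behind Tim's patch:
wake at most a fixed number of waiters per lock hold, then drop the lock
so other CPUs can make progress before continuing the walk. All names
here (WALK_BREAK_CNT, struct waiter, wake_all_in_batches) are made up for
illustration and are not from the kernel tree; the real patch keeps a
bookmark entry in the wait queue instead of always dequeuing from the
head, since kernel wake-ups do not necessarily remove entries.

/*
 * Illustrative sketch only, not kernel code: break a long wake-list
 * walk into bounded batches, releasing the lock between batches.
 */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

#define WALK_BREAK_CNT 64   /* max waiters woken per lock hold */

struct waiter {
	int id;
	struct waiter *next;
};

struct wait_list {
	pthread_mutex_t lock;
	struct waiter *head;
};

/* Wake (here: free and count) every waiter, a bounded batch at a time. */
static int wake_all_in_batches(struct wait_list *wl)
{
	int woken = 0;

	pthread_mutex_lock(&wl->lock);
	while (wl->head) {
		int budget = WALK_BREAK_CNT;

		while (wl->head && budget--) {
			struct waiter *w = wl->head;

			wl->head = w->next;	/* dequeue the waiter ... */
			free(w);		/* ... and "wake" it */
			woken++;
		}
		if (!wl->head)
			break;

		/* More waiters remain: let lock contenders in first. */
		pthread_mutex_unlock(&wl->lock);
		pthread_mutex_lock(&wl->lock);
	}
	pthread_mutex_unlock(&wl->lock);
	return woken;
}

int main(void)
{
	struct wait_list wl = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.head = NULL,
	};

	/* Queue a pathologically long list of waiters, as in the report. */
	for (int i = 0; i < 10000; i++) {
		struct waiter *w = malloc(sizeof(*w));

		w->id = i;
		w->next = wl.head;
		wl.head = w;
	}
	printf("woke %d waiters\n", wake_all_in_batches(&wl));
	return 0;
}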
Linus