Message-ID: <20170818144622.oabozle26hasg5yo@techsingularity.net>
Date: Fri, 18 Aug 2017 15:46:22 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: "Liang, Kan" <kan.liang@...el.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
linux-mm <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] sched/wait: Break up long wake list walk
On Fri, Aug 18, 2017 at 02:20:38PM +0000, Liang, Kan wrote:
> > Nothing fancy other than needing a comment if it works.
> >
>
> No, the patch doesn't work.
>
That indicates that it may be a hot page and it's possible that the page is
locked for a short time but waiters accumulate. What happens if you leave
NUMA balancing enabled but disable THP? Waiting on migration entries also
uses wait_on_page_locked so it would be interesting to know if the problem
is specific to THP.
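For reference, the knobs I have in mind are the usual procfs/sysfs ones.
A minimal sketch in C of the configuration I am suggesting, assuming the
standard paths (run as root before repeating the test):

	/* Leave NUMA balancing enabled but disable THP entirely. */
	#include <stdio.h>
	#include <stdlib.h>

	static void write_knob(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f) {
			perror(path);
			exit(EXIT_FAILURE);
		}
		fprintf(f, "%s\n", val);
		fclose(f);
	}

	int main(void)
	{
		/* Keep automatic NUMA balancing on */
		write_knob("/proc/sys/kernel/numa_balancing", "1");
		/* Disable transparent hugepages */
		write_knob("/sys/kernel/mm/transparent_hugepage/enabled", "never");
		return 0;
	}

The equivalent echo commands from a shell do the same job; the point is
only to isolate whether THP is required to trigger the collapse.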
Can you tell me what this workload is doing? I want to see if it's something
like many threads pounding on a limited number of pages very quickly. If
it's many threads working on private data, it would also be important to
know how each thread's buffers are aligned, particularly if the buffers
are smaller than a THP or base page size. For example, if each thread is
operating on a base page sized buffer then disabling THP would side-step
the problem, but with THP enabled the threads would be false sharing the
same huge page.
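To make that concrete, here is a rough sketch of the sort of layout I
mean; the names, thread count and buffer sizes are illustrative, not
taken from your workload. Each thread only touches its own base page
sized buffer, but all the buffers are packed into one THP-sized region,
so locking or migrating that huge page stalls every thread at once:

	/* Illustrative only: per-thread buffers sharing one THP. */
	#include <pthread.h>
	#include <stdlib.h>
	#include <string.h>

	#define NR_THREADS	8
	#define BUF_SIZE	4096		/* base page sized buffer */
	#define REGION_SIZE	(2UL << 20)	/* one 2M THP worth of memory */

	static char *region;

	static void *worker(void *arg)
	{
		char *buf = region + (long)arg * BUF_SIZE;
		int i;

		/* Private data, but false shared at the huge page level */
		for (i = 0; i < 100000; i++)
			memset(buf, i, BUF_SIZE);
		return NULL;
	}

	int main(void)
	{
		pthread_t threads[NR_THREADS];
		long i;

		/* A 2M-aligned 2M region is a THP candidate when THP is enabled */
		region = aligned_alloc(REGION_SIZE, REGION_SIZE);
		if (!region)
			return 1;

		for (i = 0; i < NR_THREADS; i++)
			pthread_create(&threads[i], NULL, worker, (void *)i);
		for (i = 0; i < NR_THREADS; i++)
			pthread_join(threads[i], NULL);
		return 0;
	}

If your workload looks anything like that, knowing the real buffer sizes
and alignment would tell us whether the waiters are piling up on a page
that is genuinely shared or only incidentally shared via THP.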
--
Mel Gorman
SUSE Labs